Do we really want AI books, art, film, and music? Effortless art is on the rise

A bespoke film starring you in the leading role, a dream painting in five seconds, or a book about grief that a machine writes for you. Art has never been so easy — and so awkward. Artists, platforms, and tech companies are experimenting enthusiastically with generative AI. Yet I keep returning to the same nagging question: do people really want to be moved by something that has no soul? Can “effortless” simply be beautiful? In this article, I dive into that question.

At the TEFAF art fair in Maastricht, I recently overheard someone standing before a painting say, “There’s so much soul and passion in it.” The Picasso on display was created with great care, effort, and life experience. It isn’t perfect, but it really feels alive. That sense of authenticity is missing from much AI art. An algorithm has no childhood memories, no sorrow, no Monet-like view of a French lily pond. So, critics say, it can’t produce real art.

But reality is more complicated. A 2023 MIT study found that 40% of people cannot distinguish AI-generated art from human-made work. Platforms like ArtStation, Spotify, and TikTok are overflowing with wildly popular AI content — sometimes openly, sometimes covertly. Apparently, origin doesn’t always matter; if the result resonates, that’s all that counts.

Collaborating with the machine
Fortunately, it's not a choice between human and machine. More and more creatives see AI not as a threat, but as a creative partner. Dutch DJ Reinier Zonneveld experiments live with AI in his techno sets. Together with an algorithm, he builds beats, reintroduces loops, and improvises based on audience energy. The result is a hybrid set born of two collaborators rather than one! Artists like Sougwen Chung use AI as a brush: they train models on their own work so the machine becomes an extension of their style. It's not replacement — it's a new form of collaboration.

People often talk about AI’s dangers in art. But I believe there are equally compelling arguments in its favor. AI can broaden access to creativity. You don’t need an expensive art school education or a record label to make what’s in your head. With generative AI, anyone can shape ideas into text, image, or sound — even without technical expertise. In that sense, AI democratizes art by lowering the barrier to expression.

Writing books
Over the past few years I've published three books, one of which took four years of research. I'm already working on a new one, and I often hear the accusation: "You must be letting AI write it for you." Partly true. I do use AI to conduct research for my books — running analyses on dozens of other books, studies, and discussion forums like Reddit, yielding incredibly interesting insights for my writing. But I still enjoy the actual writing too much — and it genuinely enriches my daily work as a speaker and coach.

Moreover, AI can stimulate rather than stifle human creativity. Artists collaborating with AI are sometimes confronted with unexpected patterns, ideas, or distortions they never would have conceived on their own. That can lead to fresh perspectives on their own work. Oxford professor Marcus du Sautoy sees it as an opportunity:

“AI can jolt us awake from our automatic routines. People often behave like machines, and AI helps pull us out of that.”

You could say the algorithm isn’t there to replace the artist, but to serve as a playful antagonist, challenging you to think further.

Starting a new chapter?
Then there's the more philosophical argument I read recently: art has always been a mirror of its time. The Industrial Revolution brought both realism and abstraction. The advent of photography freed painters from the obligation to depict reality. Now, in an era dominated by technology in our daily lives (just look at your phone or the internet), it makes sense that art would respond in kind. Perhaps using AI in art is not the end of an era, but the start of a new chapter.

Concerns
Still, I have serious concerns about these developments. One of the greatest is the blurring line between real and fake. In the Netflix documentary What Jennifer Did, altered photos were presented as authentic. The filmmaker admitted parts of the images had been manipulated but remained vague about how and with what tools. In my view, this raises not only aesthetic questions but ethical ones. If images no longer represent what was real but only what seems plausible, we undermine trust in visual information — especially in journalistic or documentary contexts.

I also see a real risk of artistic mediocrity — a kind of bias. AI relies on existing data: what’s popular, recognizable, and average. It’s ideally suited to reinforce what we already know. In my experience, it rarely surprises or shocks. Film critic Gwilym Mumford warns of a future of tailor-made AI films in which you star in a romantic comedy with Marilyn Monroe:

“A film that only follows your wishes will never surprise you.”

It’s the unexpected choices of an artist that give art its layers and meaning — something AI doesn’t yet master.

There’s also the economic angle: increasingly, film studios, publishers, and platforms use AI to cut costs. Posters for major series are no longer designed by illustrators but generated in Midjourney. Tyler Perry halted his $800 million studio expansion when he saw Sora:

“Jobs are going to be lost.”

Even music platforms are experimenting with AI DJs. For Reinier Zonneveld, AI is a playful partner on stage — but for many artists, the same technology threatens their livelihood. The question is: who truly benefits from effortless art? And who is quietly displaced?

The core questions we need to start asking ourselves are: what do we seek in a painting, a novel, a film? Solace, wonder, recognition? Does it matter whether that feeling is evoked by a human or a machine? Maybe it does — maybe it doesn’t.

The problem begins when we stop questioning
Because as long as AI remains a tool and not the story itself, there is room for human expression. The problem only arises when we stop asking questions — when we mindlessly accept that "good enough" is truly good. When we confuse ease with meaning.

“Effortless art” sounds appealing: art without sweat, without struggle, without time pressure. Yet it’s precisely in the effort, the not-knowing, the searching, that the soul resides — the soul that woman at TEFAF spoke of. If art demands nothing of its maker, what does it ask of its audience? Perhaps the value of art lies not in how quickly it’s created, but in how long it lingers within us.

Jan Scheele has worked for thirteen years at the intersection of deep tech, strategy, and leadership. As a keynote speaker and conference chair, he makes technology tangible for boardrooms, executive teams, and large stages, without oversimplifying the complexity or hiding it behind buzzwords.

His background is in building. As CEO of a technology scale-up, founder of several tech companies, and organizer of more than fifty TED events worldwide, he has seen up close how technological choices play out in strategy, governance, and culture. Through his involvement with the World Economic Forum and the BCNL Foundation, he looks not only at what is technically possible, but also at what is sustainable in governance terms and desirable for society.

He has published five books, two of which are Amazon bestsellers, and writes weekly about AI, blockchain, and the organizational consequences of deep tech. His blogs have reached more than two million readers.

Your clothing is getting smarter than your phone… and will soon know you're getting sick before you do

What if your clothing did more than keep you warm? What if it also measured how you're doing, improved your posture, or helped you sleep better? That's no longer a distant dream. Smart clothing — also known as e-textiles or smart wear — is firmly on the rise. And in my view, it's one of the most tangible examples of how technology is slowly but surely changing our daily routines.

What exactly is it? Smart clothing is clothing with technology built into it. Often that means sensors in the fabric, conductive threads, Bluetooth chips, or even small AI modules. These allow your clothing to collect information about your body, movement, temperature, or surroundings. That data is sent to an app or system that does something with it, from offering insights to giving direct feedback.
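That pipeline — garment sensors producing readings, and app-side logic turning them into feedback — can be sketched in a few lines. This is purely an illustrative sketch: the sensor fields, thresholds, and the `feedback` function below are all invented for the example and not taken from any real product.

```python
from dataclasses import dataclass

# Hypothetical sketch of the smart-clothing data flow: sensors in the garment
# produce readings, and app-side logic turns them into feedback for the wearer.
# All field names and thresholds are invented for illustration.

@dataclass
class SensorReading:
    heart_rate: int       # beats per minute, from a fabric sensor
    skin_temp_c: float    # skin temperature in degrees Celsius
    posture_angle: float  # degrees of forward lean, from a posture sensor

def feedback(reading: SensorReading) -> list[str]:
    """Translate raw garment data into simple, direct feedback."""
    tips = []
    if reading.posture_angle > 15:
        tips.append("Gentle vibration: straighten your back")
    if reading.heart_rate > 160:
        tips.append("Notification: heart rate is high, slow down")
    return tips

# Example: a runner leaning too far forward with an elevated heart rate.
tips = feedback(SensorReading(heart_rate=170, skin_temp_c=36.8, posture_angle=20.0))
```

The point of the sketch is that the garment itself stays dumb; the "smartness" lives in the logic that interprets the readings.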

What strikes me is that the applications are remarkably broad, and by no means all futuristic or far away. A few examples:

Wearable X developed the Nadi X yoga pants: yoga pants with built-in vibration motors that gently let you feel when you need to adjust your posture. No yoga teacher needed — your pants coach you.

Levi's and Google launched the Commuter Trucker Jacket, a denim jacket with touch-sensitive sleeves. You can swipe to skip a track or activate navigation without picking up your phone. It sounds like a gimmick, but it works surprisingly smoothly.

Under Armour's Athlete Recovery Sleepwear absorbs body heat and returns it as infrared light, which is said to promote better muscle recovery and deeper sleep. I don't know whether it really works — but the idea is strong: nightwear that actively contributes to your recovery.

Ambiotex makes a smart shirt that measures your anaerobic threshold. Interesting for elite athletes, but in my view also for people who want to train more consciously or recover from an injury.

Hexoskin offers shirts that monitor not only heart rate and breathing but also your sleep patterns and fatigue. I've tested one of these shirts: it feels like ordinary sportswear, but the data is surprisingly extensive and useful.

Neviano, a French fashion brand, makes swimsuits with a UV sensor. When the UV index gets too high, you get an alert that it's time for sunscreen or shade. Practical, especially for parents with young children.

And then there are the Sensoria socks, which measure pressure points under your feet while you run. Through the app you get running-technique tips to help prevent injuries. That may sound minor, but anyone who has ever had a stress fracture knows how valuable these insights can be.

In my view, the power of smart clothing lies above all in the fact that you don't have to do anything for it. You put it on, and it does its job. No extra devices, no extra steps — just your clothes. And that makes it, for me, more interesting than many other wearables.

At the same time, there are clear limitations. The price is still high: a smart jacket easily costs three to five times as much as a regular one. And although the technology is impressive, the user experience often isn't quite smooth enough yet. Batteries need charging, connections sometimes drop, and washing is usually a challenge.

Moreover, privacy is playing an ever-larger role. Clothing that measures your heart rate, sleep, or even your stress level… who has access to that data? In my view, the sector still needs to mature here. Comfort and health are great — but only on your terms.

My verdict? Smart clothing is not a hype. It's a logical evolution in a world where technology sits ever closer to our bodies. The examples above show it has long since stopped being a vision of the future. But to be embraced truly widely, it needs to become more affordable, more reliable, and above all more human. Smart clothing shouldn't just be smart — it should also help you feel better, freer, and healthier. That, to me, is what's truly stylish.


OpenAI is building its own social media platform: smart move or peak AI hype?

OpenAI – the company behind ChatGPT – is reportedly working on its own social media platform. This news recently surfaced through a series of tweets, leaks, and lively discussions on forums like Hacker News. What started as a rumor now seems more serious than initially thought. Allegedly, there is already an internal prototype where users can post AI-generated images in an Instagram-like feed. In this article, I’ll dive into it!

My first question, probably like many others: why on earth would an AI company dive into the already saturated world of social media? Is this a brilliant strategic idea, or an expensive distraction from their core mission? The first signals about this plan came from The Verge, which reported that OpenAI is working on a social platform where image generation via ChatGPT plays a central role.

Imagine: you create a Ghibli-style AI image, click ‘post,’ and share it with your followers in a feed. Sam Altman, OpenAI’s CEO, is said to be gathering feedback behind the scenes from trusted insiders about the idea.

That Altman is thinking about this isn’t really surprising. Back in February, he responded to a post about Meta’s AI plans by saying, “Ok fine maybe we’ll do a social app.” It sounded like a joke at the time, but now it appears to be a serious strategy. Combine that with OpenAI’s public experiments with an AI image feed on Sora.com, and the picture becomes clearer.


Why this could actually be a smart move

Over the past few days, I’ve been reading through countless forums like Reddit and Hacker News to form a well-rounded opinion on this plan. I have to admit, there are some interesting angles to it.

  1. Fresh data supply
    OpenAI constantly needs new, real user data to improve its models. As more and more online data disappears behind paywalls or gets contaminated with AI-generated content, having a proprietary stream of genuine user data is invaluable. A social network where people actively post, comment, and share provides exactly that.
  2. Creative co-creation
    Instead of endless political arguments or viral videos, an AI-driven platform could revolve around expression and creativity. Think of posts created through collaboration between humans and machines. That’s a fundamentally different starting point compared to existing networks—and potentially refreshing.
  3. Always a digital sparring partner
    Imagine a platform where you type a thought, and the AI helps make it sharper, more visual, or more engaging. That’s not social media as we know it—that’s a personal assistant for your content creation. There’s real potential here. You can already see hints of this happening on platforms like Instagram and WhatsApp, where similar features are being introduced.

A new network… or a repeat of past mistakes?

Of course, there are also plenty of reasons why this could end up being another Google Plus moment.

  1. An overcrowded market
    We already have X (formerly Twitter), Threads, Mastodon, Bluesky, Reddit, Discord, Instagram… the list is endless. Why would users adopt yet another network, especially one potentially populated by bots? Personally, I’m finding myself reducing the number of active channels just to maintain focus.
  2. AI as a social illusion
    A network filled with posts and conversations created by AIs may sound fascinating, but in my view, it can quickly feel hollow. Many people value social networks because real people are behind them. If that human layer is missing, there’s little emotional value left.
  3. Privacy and trust
    OpenAI is already under fire for using training data without clear consent. If the company also starts reading your social posts—whether you consent or not—how transparent and fair will that be? And who decides what you are allowed to share on a network built by an AI company?
  4. Loss of focus at OpenAI
    Putting on my startup coach hat for a moment: OpenAI is fundamentally an AI research and product company. Building a social network—with all the moderation, growth hacking, community building, and everything else that entails—risks stretching the company too thin. Resources currently devoted to fundamental AI development could be scattered across a project far outside their original expertise. In a market where competition around model quality, reliability, and speed is heating up, this could be a major distraction.

Caught between ambition and a hunger for data

Whether this is a good idea really depends on your perspective. For OpenAI, it’s a smart move strategically: it delivers user data, visibility, and a way to integrate their tools more deeply into everyday life. But for users, the tension is greater. Do we want yet another platform? And more importantly: do we want to fill that platform with our thoughts, images, and conversations, knowing they might be used to train AI systems?

If this network manages to create a more positive, creative, and AI-assisted form of online interaction, it could add real value. But if it becomes just another smart data trap, filled with AI content and lacking a human soul, people will likely walk away as quickly as they arrived.


When will we hear more?

For now, nothing has been officially announced, but the direction is clear. The AI image feed on Sora seems to be the first public test. It’s likely that in the coming months, OpenAI will experiment with small features within ChatGPT itself—like sharing generated content with others—before potentially launching a separate app.

Whether this new network can truly attract users comes down to one thing: does it genuinely enhance the way we interact online? If it’s just an AI-coated version of existing networks, the hype will fade fast. But if OpenAI succeeds in giving co-creation and AI interaction a more human face, it could very well carve out a whole new kind of online space.


The Battle for AI Supremacy: Manus as China’s Boldest Move Yet?

The race for AI dominance is starting to look more and more like a digital Cold War between the US and China. While the US has long led the way with companies like OpenAI and Google, China is now making serious moves to close the gap. Earlier this year, the launch of DeepSeek already shook up the sector—and now there’s Manus: an AI agent that not only responds to commands but can also perform tasks independently. I finally got the chance to test this tool and share my experience in this new article.

These days, we're seeing weekly updates from major AI platforms: faster, smarter, with improved reasoning and source referencing. I previously wrote about my mixed experiences with AI agents like GPT Operator and DeepSeek. More recently, I saw countless intriguing use cases pop up featuring the new Chinese AI hype: Manus. In my view, Manus is not just another chatbot — it's an AI that can think, plan, and execute without constant human supervision.

Manus positions itself as a direct challenger to OpenAI’s Operator and DeepResearch. Some have even called its launch a “second DeepSeek moment,” not just because of its sharp user pricing but also due to the quality of its output. But is this really the next big breakthrough—or just smart marketing?

Jack-of-all-Manus

Manus is an AI agent developed by the Chinese company Butterfly Effect. What sets the tool apart is its so-called multi-agent architecture. Instead of one AI model trying to do everything, Manus uses a combination of specialized sub-agents. This allows tasks to be split and handled more efficiently.
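Butterfly Effect hasn't published details of this architecture, but the basic idea of splitting work across specialized sub-agents can be sketched roughly as follows. Everything here — the agent names, the `dispatch` function, the task kinds — is a hypothetical illustration of the pattern, not Manus's actual design.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative multi-agent dispatcher. Manus's real internals are not public;
# the sub-agents and task kinds below are invented for the example.

@dataclass
class Task:
    kind: str     # e.g. "search" or "writing"
    payload: str  # what the sub-agent should work on

def search_agent(payload: str) -> str:
    # Stand-in for a sub-agent specialized in web search.
    return f"search results for: {payload}"

def writing_agent(payload: str) -> str:
    # Stand-in for a sub-agent specialized in drafting text.
    return f"draft text about: {payload}"

# Registry mapping each task kind to the sub-agent that handles it.
AGENTS: dict[str, Callable[[str], str]] = {
    "search": search_agent,
    "writing": writing_agent,
}

def dispatch(tasks: list[Task]) -> list[str]:
    """Split a larger job into tasks and route each to a specialized agent."""
    return [AGENTS[task.kind](task.payload) for task in tasks]

# Example: planning a trip becomes a search task plus a writing task.
results = dispatch([
    Task("search", "flights to Iceland in June"),
    Task("writing", "a day-by-day hiking itinerary"),
])
```

In this pattern, the efficiency gain comes from each component staying narrow: the dispatcher only decides the split, and each specialist handles one kind of task.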

In theory, Manus could plan an entire trip: search for flights, book hotels, compare prices, and even create a travel itinerary. Or assist a recruiter by analyzing résumés, ranking candidates, and drafting interview questions. This is a fundamentally different approach from classic chatbots like ChatGPT, which still require human supervision to complete most tasks accurately.

Just as DeepSeek proved that high-quality AI models can be built on a fraction of Western budgets, Manus aims to make a similar impact in the AI-agent category — which, in my view, is still the biggest AI trend of the year.

Why the Hype Around Manus?

Manus isn’t just technologically interesting—it’s also a marketing masterpiece. The first demo videos went viral instantly, showing off an AI that could independently manage complex tasks.

It struck a chord: within days, the Manus community grew to over 180,000 members on Discord, with invite codes selling online for thousands of dollars.

That exclusivity—invite-only access—helped build the hype even more. Just like with DeepSeek, it created the impression that Manus was a gamechanger, available only to a select few. On top of that, the geopolitical rivalry between China and the US plays a strong role. Many Chinese users see Manus as a symbol of technological independence and a direct response to OpenAI and Google.

But the real question remains: does it actually work, and does it deliver on its promises?

Three Types of Tests with Manus

To test the promises of Manus, I put the platform through three real-world tasks I’m currently working on.

1. Booking a Vacation

I’m planning to go hiking in Iceland this June. But I have some specific preferences: a different hike each day, specific flights, a certain type of lodging, and side activities like spa visits. Manus was able to find flights and list some good hotels, but couldn’t complete the bookings. It also missed key details like baggage fees for my hiking gear.

2. Creating a Marketing Campaign

For one of my businesses, I asked Manus to set up a complete social media strategy, including ads and audience analysis. The results were surprisingly impressive: Manus analyzed competitors, created a posting schedule, and even generated ad copy. But after a review, some suggestions turned out to be unrealistic or based on outdated data. Bummer!

3. Automating a Recruitment Process

For a large event I’m organizing, I wanted Manus to help select from volunteers who submitted applications. While the AI gave solid suggestions, a deeper look revealed that some rejections were unfair. The system struggled with nuance in work experience and favored keyword-heavy résumés over actual qualifications.

Execution Falls Short

Manus is great at structuring and planning tasks efficiently, but in my opinion, its execution still leaves much to be desired. It’s not a fully autonomous AI, but more like a clever assistant that can take over parts of tasks—but still needs human oversight.

The tool struggles with reliability. On various forums and group chats, I saw users reporting that the AI would get stuck in infinite loops or generate incorrect information. This is a major issue for applications where precision matters—such as financial analysis for crypto trading.

Speed is another weak point. While OpenAI’s DeepResearch completes tasks in seconds, Manus often takes minutes. I tested this a few times, and for more complex tasks, it took quite a while to generate a usable result.

There’s also a lack of transparency. Butterfly Effect gives little detail on how the AI actually works. It’s not a fully new tool either, but a so-called “wrapper” built on existing models like Anthropic’s Claude and Alibaba’s Qwen. How much of it is truly innovative remains unclear.

And then there’s the issue of privacy and security. Just like with DeepSeek, Manus raises concerns about data protection. Western companies will likely be hesitant to grant a Chinese AI access to sensitive business information—especially given China’s strict regulations on data collection and state control. Not to mention the recent backlash surrounding DeepSeek.

Will We Keep Hearing About Manus?

Manus AI has the potential to usher in a new era of autonomous AI agents—but it’s not quite there yet. The technology is promising, but far from flawless. It feels like a rough diamond that still needs a lot of polishing before it can truly compete with established players.

If Butterfly Effect improves its infrastructure, increases reliability, and becomes more transparent about how Manus works, it could become a serious contender in the AI race—especially at its current price point. Also because it’s far easier to use than GPT’s Operator. Until then, Manus remains a fascinating experiment—with tons of potential, but also plenty of work to be done.


The Road to AGI: How Close Are We to Human-Level Artificial Intelligence?

The idea of artificial intelligence being as smart and versatile as a human being sounds like science fiction. But with the rapid pace of AI development, more and more experts are asking: how long will it take before we reach Artificial General Intelligence (AGI)? The moment an AI can reason independently, invent new concepts, and adapt as flexibly as we do. In this article, I dive into that question.

Recently, I had a discussion about AI with my father (69, amateur chess player), and before long we were talking about Garry Kasparov. In the 1990s, Kasparov was a world-renowned chess grandmaster, famous for his playing style and numerous victories. But in 1997, something happened that shocked the world: he lost to IBM’s supercomputer Deep Blue. For the first time, a world chess champion was defeated by a computer—something many had thought impossible. Kasparov was stunned and later said: “I felt a new kind of intelligence, a spirit in the machine.”

Yet Deep Blue wasn’t what we’d now call AGI. The supercomputer could only play chess. But the moment marked a turning point: technology began outperforming humans at specific tasks. It sparked a debate about the limits of artificial intelligence. Today, I see the same discussion flaring up again—but now it’s not just about winning a game, but disrupting entire industries.

When Will We Have AGI?

Predictions on when we’ll achieve AGI vary widely. Some researchers, like Dario Amodei of Anthropic, believe we’ll see systems with early AGI traits as soon as 2026. Others, like AI pioneer Geoffrey Hinton, think it might take five to twenty years. Quite a margin…

But not everyone is convinced we’ll ever get there. Yann LeCun, the highly respected AI researcher at Meta, argues that AGI is still decades away. He even suggests it may never be possible in the way people imagine.

Demis Hassabis, CEO of DeepMind, is more cautious in his forecasts: “I think human-like reasoning in AI is possible within a decade, but it’s far from certain. We still need to make fundamental breakthroughs in our understanding of intelligence.”

Zooming out, though, it’s clear to me that AI is already having a huge impact—on individuals, organizations, and entire industries. Both positive and negative. Whether AGI arrives soon or not, the boundaries of what AI can do are already expanding rapidly.

From Narrow AI to General Intelligence

Today’s AI systems, like GPT-4 and Gemini, are impressively versatile. I’m continually amazed by how these LLMs support and amplify my daily work. But these models are still specialized. They can generate text, write code, create images—but all within clearly defined boundaries. A language model like GPT can’t perform complex financial analysis like Bloomberg’s AI. McKinsey’s AI can’t compose music or analyze medical scans.

AGI would need to combine all of these skills into one system. An AI as flexible as a human would learn from experience, adapt to new problems, and perform tasks it was never explicitly trained for.

That’s a massive leap beyond today’s AI. Sam Altman, CEO of OpenAI, calls AGI “the ultimate technological leap” and says: “Once we reach AGI, it will become the most powerful tool humanity has ever created.”

So Where Are We in the AGI Race?

While AGI is still a thing of the future in my view, we’re already seeing AI systems perform tasks that were recently thought impossible.

AI models like GPT-4 and Gemini outperform most humans on complex exams. OpenAI’s GPT-4 scored in the top 10% on the Uniform Bar Exam for U.S. lawyers, and DeepMind’s Med-PaLM can answer medical questions at the level of an experienced doctor. These systems not only provide correct answers but also reason through complex problems, spot patterns in data, and even generate hypotheses.

AI’s ability to independently solve problems and make connections grows with each version. AlphaFold, a breakthrough from DeepMind, predicted the 3D structure of almost all known proteins—a problem researchers, like my younger brother, had struggled with for decades. To me, this proves that AI already functions as an intelligent system, going beyond simple pattern recognition.

Geoffrey Hinton, one of the founding fathers of deep learning, says: “We’re reaching a point where AI is starting to learn like humans do. That’s both exciting and worrying.”

But despite this progress, AI models are still limited. They lack motivation, can’t develop abstract concepts like humans do, and rely heavily on vast amounts of training data. This makes the leap to Artificial General Intelligence (AGI) complex.

What’s the Next Step in AGI Development?

Looking at current challenges and developments, AGI remains, in my view, an ambitious goal for now. Think of the technologies we’ve developed over the past century. Progress came in gradual steps—from the lightbulb to the internet, from the first computer to smartphones. But AGI is a different story. It’s not a matter of incremental improvements; it’s a bold leap into a fundamentally new reality.

Sam Altman recently said: “We’re now confident we know how to build AGI.” And not decades from now—but possibly within Trump’s next presidential term, meaning in just 3.5 years. His prediction no longer feels like science fiction. The computing power, models, and scalability show that final barriers are falling faster than expected.

Artificial General Intelligence (AGI) won’t arrive overnight, but the first systems that resemble it are already on the horizon. If the predictions are right, it won’t be long before we’ll have to ask ourselves: how do we collaborate with an intelligence that can outperform us in every domain?


Jan Scheele has worked for thirteen years at the intersection of deep tech, strategy, and leadership. As a keynote speaker and conference chair, he makes technology tangible for boardrooms, executive teams, and large stages, without oversimplifying its complexity or hiding it behind buzzwords.

His background is in building. As CEO of a technology scale-up, founder of several tech companies, and organizer of more than fifty TED events worldwide, he has seen up close how technological choices play out in strategy, governance, and culture. Through his involvement with the World Economic Forum and the BCNL Foundation, he looks not only at what is technically possible, but also at what is sound governance and socially desirable.

He has published five books, two of them Amazon bestsellers, and writes weekly about AI, blockchain, and the organizational consequences of deep tech. His blogs have reached more than two million readers.

Want to Make AI Think Better? Try Chain-of-Thought Prompting!


Large language models are great at tasks like writing and translation, but they often struggle with complex problems like math and logical reasoning. That’s because they don’t naturally think step by step. In this article, I’ll show you how you can make them reason step by step—with a surprisingly simple trick.

For a long time, we assumed that AI models just needed more data and more computing power to improve. But even with major advances in language understanding, models kept struggling with complex reasoning—math, logic puzzles, you name it. They often produced answers that sounded right but turned out to be complete nonsense.

The Chain of Thought

This began to change when researchers discovered a clever trick: Chain-of-Thought (CoT) prompting. Instead of asking the model to give a direct answer, they simply added the phrase: “Let’s solve this step by step.” Suddenly, the AI started breaking problems down logically and producing more accurate answers.

I previously wrote about DeepSeek, the Chinese challenger to GPT, Llama, and Gemini. In my view, CoT prompting really took off when DeepSeek showed how powerful it can be: its model is trained to apply this step-by-step reasoning by default.

This made it easier for everyone to get AI to truly think, instead of just guessing. CoT prompting is now seen as one of the most effective ways to make AI models smarter and more reliable. Whether it’s for math, customer service, or business analysis—AI can now reason, all thanks to a simple but brilliant prompting technique.
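The whole trick fits in one line of code. Here’s a minimal sketch in Python (the `cot_prompt` helper is my own illustration, not part of any official API): it simply appends the magic phrase to whatever question you have.

```python
def cot_prompt(question: str) -> str:
    """Wrap a question so the model is nudged to reason step by step."""
    return f"{question}\nLet's solve this step by step."

# A direct prompt versus a Chain-of-Thought prompt:
direct = "What is 17 * 24?"
cot = cot_prompt("What is 17 * 24?")

# Either string can be sent as the user message to any chat model;
# the CoT version reliably triggers longer, step-wise reasoning.
print(cot)
```

That one appended sentence is the entire difference between "guess an answer" and "show your work".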

The Art of Prompting AI

We’ve now seen various prompting styles emerge.

Zero-shot prompting

You give the model a task with no examples. This works well for simple tasks, but not for complex problems.

Example: “Write a poem about AI.”
The model generates a poem without further guidance.

Few-shot prompting

You give a few examples to help the model understand the structure of the task. This is useful for more structured outputs like summaries or translations.

Example: “Here are two article summaries. Use the same style to summarize this next one.”
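In code, few-shot prompting is nothing more than string assembly. A small sketch (the `few_shot_prompt` helper and its formatting are my own, purely illustrative):

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Build a few-shot prompt: worked examples first, then the new task."""
    parts = []
    for source, target in examples:
        parts.append(f"Article: {source}\nSummary: {target}")
    # The final block has no summary: that is what the model must fill in.
    parts.append(f"Article: {new_input}\nSummary:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    [("Long article A ...", "Short summary A."),
     ("Long article B ...", "Short summary B.")],
    "Long article C ...",
)
```

The examples teach the model the pattern; it completes the last, unfinished block in the same style.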

Active prompting

You evaluate the model’s output and provide feedback so it can improve. I’ve sometimes spent an hour having this kind of back-and-forth.

Example: “This answer isn’t precise enough. Please give a more detailed explanation and rewrite the conclusion.”

How Can You Use Chain-of-Thought Prompting Yourself?

Lately, I’ve been experimenting a lot with CoT, and these tips work really well for me:

1. Use a step-by-step prompt

Add “Let’s think through this step by step” to your prompt to encourage logical reasoning.

Example: “What’s the square root of 144? Let’s solve this step by step.”

2. Provide a good example

Let the model learn from a carefully worked-out reasoning process.

Example: “Here’s how to do a budget analysis: first, list all income sources, then subtract expenses…”

3. Let the model generate multiple answers

Compare the outputs and choose the most consistent one.

Example: “Give three different summaries of this text and select the best one.”

4. Use active prompting

Give feedback and let the model correct its mistake.

Example: “You skipped the third step. Try again and include that step.”

Not All Models Handle Chain-of-Thought Prompting Well

In my experience, not all models respond well to CoT. Research also shows that CoT prompting works best with large models (100+ billion parameters) like GPT-4 and DeepSeek. Smaller models struggle with long, logical reasoning chains.

Here are a few other important factors in using CoT effectively:

  • Self-consistency: Let the model solve the same problem multiple times and pick the most logical answer. This helps reduce errors and leads to more reliable responses.
  • Robustness: CoT prompting works even if your examples aren’t perfectly worded. You don’t need flawless language to get results.
  • Prompt sensitivity: A poorly written prompt can ruin your CoT attempt. Make sure your instructions are clear and your question is well-defined.
  • Coherence: The steps should logically follow one another. If a step is missing or flawed, the final conclusion may be incorrect.
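Self-consistency is easy to implement yourself: run the same CoT prompt several times, then take a majority vote over the final answers. A sketch in plain Python (the sampled answers below are made up; in practice they would come from repeated model calls):

```python
from collections import Counter

def self_consistent_answer(answers: list[str]) -> str:
    """Pick the answer that appears most often across several model runs."""
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# Suppose five CoT runs on the same question returned these final answers:
runs = ["12", "12", "14", "12", "13"]
best = self_consistent_answer(runs)  # majority vote picks "12"
```

Occasional reasoning slips get outvoted, which is exactly why this reduces errors.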

Chain-of-Thought Prompting is a Game-Changer

In my view, Chain-of-Thought prompting is truly a game-changer for AI. I’ve seen firsthand how much it improves the quality of output. With the right prompts and techniques, you enable AI to think better, provide more accurate answers, and solve complex problems.

Start with simple tasks and gradually introduce step-by-step reasoning. You’ll soon notice that AI not only responds more intelligently, but also reveals insights that would otherwise remain hidden.
Got your own tip? Share it in the comments!


Forget the Human—Is Your Marketing AI-Agent Proof?


For thousands of years, marketing has focused on people. But through my work with AI agents and tools like GPT Operator, I see a rapid shift in how people discover and buy products. Increasingly, they’re no longer making decisions themselves—AI agents are doing the work for them. How do you adapt to that? That’s what I explore in this article.

From travel planners to shopping assistants, AI models like ChatGPT, Google Gemini, and Meta’s Llama are becoming the intermediaries between consumers and brands. But does this work the way companies hope? Will AI agents really determine which brands consumers choose? Or are companies risking invisibility by blindly optimizing for AI?

AI Will Soon Fill Your Shopping Cart—But Based on What?

According to a Boston Consulting Group study, 28% of consumers already use AI to help choose products like cosmetics. But does that mean AI agents will eventually make all purchase decisions? I think that’s still very much up in the air.

Just look at Google’s featured snippets. For years, I optimized my companies’ SEO strategies to appear at the top of search results. But now that AIs like Gemini and ChatGPT generate their own answers, it’s unclear whether consumers will even click through to websites at all.

The same could happen with AI agents. Businesses might invest huge amounts of time and resources into AI-agent-friendly branding, only to discover that the AI ends up recommending a competitor’s product for unclear reasons.

Another issue is how influenceable AI agents really are. Google, OpenAI, and Meta keep their recommendation algorithms largely secret. I had hoped regulators—especially in the EU—would enforce more transparency, but that hasn’t happened yet. And even if companies figure out which factors count, AI developers can change the rules at any time. We’ve seen this with Google’s ever-changing search algorithm.

One more risk I foresee: AI agents may not recommend the best products, but the ones for which they receive the most data—or financial incentives. Just like search engines and social media can be manipulated through ads and SEO, AI agents could be biased too. That means companies aren’t just competing with each other—they’re competing with the opaque decision-making of the AI itself.

Smart AI, Dumb Choices: How Brands Struggle With AI Recommendations

Here are a few recent examples I came across that highlight how things can go very right—or very wrong.

Ballantine’s Whisky, a product meant for a broad audience, was misclassified by AI agents like Meta’s Llama as a premium product. Why? Because there was a lot of online content about its luxury editions. To correct this perception, Ballantine’s changed its ads and content strategy to emphasize the accessibility of its standard whisky. But it’s still unclear whether the AI actually updated its view.

Klarna launched an AI customer service assistant based on OpenAI technology in early 2024. Within the first month, it handled the workload of 700 full-time employees, drastically reducing customer service costs.

Initially, customers were just as satisfied with the AI as with human agents. But when Klarna expanded the AI to offer product comparisons and recommendations, problems began. The AI gave conflicting advice or favored certain brands based on unclear criteria.

Booking.com and Expedia are experimenting with AI-driven search results, where AI agents suggest options based on preferences and past bookings. Hotels and travel providers are no longer just competing on price and quality—but also on how well their offers get picked up by AI models. This forces businesses to tailor their marketing to criteria used by AI agents, without knowing exactly what those criteria are.

How to Make AI Work For You—Not Against You

AI agents are increasingly deciding what consumers see. That requires a whole new way of thinking. Traditional marketing techniques still matter, in my view, but they must be expanded with strategies tailored to how AI agents process and recommend information.

Maintain a consistent and credible digital presence. AI models rely on everything available about your brand. If conflicting information is online, it can lead to confused—or even negative—AI interpretations of your brand.

Understand how AIs perceive your brand and shape that image proactively. Use tools like Share of Model to analyze how AI agents view your brand. Make sure AI has access to trustworthy sources like articles from respected platforms.

Structure your content for AI crawlers. Just as SEO is important for search engines, structuring content is crucial for AI agents. Use schema.org markup, clear metadata, and fast loading speeds to make your content easier for AI to interpret.
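As a concrete illustration of that markup tip, here is a minimal schema.org Product snippet built in Python. All the product values are placeholders, and this is only a sketch of the general pattern, not a complete markup strategy:

```python
import json

# A minimal schema.org Product description, built as a Python dict.
# Brand, name, price, and description are placeholder values.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Standard Whisky",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "description": "An accessible everyday whisky for a broad audience.",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "EUR",
        "price": "24.99",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output on the page in a <script type="application/ld+json"> tag
# so crawlers (and AI agents that reuse crawl data) get unambiguous facts.
print(json.dumps(product_jsonld, indent=2))
```

The point is that structured fields like `brand` and `price` leave far less room for an AI to misclassify your product than free-form marketing copy does.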

Experiment with prompt influence and online conversations. Research from Carnegie Mellon shows that small changes in how questions are phrased can significantly impact AI recommendations. Test prompts strategically and steer conversations on platforms like Reddit and Quora.

The rise of AI agents is changing how consumers make decisions—but that doesn’t mean companies should blindly chase algorithms. AIs are volatile, evolving, and easily influenced—and often not in favor of the businesses they serve.

Companies that focus solely on AI optimization without a broader strategy risk becoming invisible if AI changes the rules. And those rules are shifting faster than most companies can adapt.

What does matter is a hybrid strategy: stay attractive to AI agents, but don’t lose sight of human connection. People still form emotional bonds with brands—bonds no AI can replicate. The best path forward is to use AI smartly, without losing control of your story.


GPT Operator as a Personal Assistant: Does It Deliver? My 4 Experiences


In the midst of all the DeepSeek hype, the launch of GPT Operator went almost unnoticed. It was supposed to be GPT’s long-awaited answer to the biggest AI trend of the year. But how does it actually work—and does it live up to expectations? I decided to experiment with Operator and share my experiences in this article.

Back in 1950, Norbert Wiener, the father of cybernetics, wrote in his book The Human Use of Human Beings that automatic machines would one day be able to take over human work. He warned that technology would automate not just physical labor, but also mental tasks.

At the time, this sounded like science fiction—but Wiener already foresaw that machines would become smarter than people expected. As he wrote:

“The automatic machine, when used for production, competes with human labor not on the basis of man’s muscle power, but on the basis of his intelligence.” – Norbert Wiener

AI agents have long been a dream scenario, but GPT Operator feels like a serious step forward. The technology is powered by a new model—the Computer-Using Agent (CUA)—which combines vision and reasoning. And yes, it’s available only to the happy few with a $200/month Pro subscription. But just how smart is this AI? Last weekend, I got to play with Operator via a client in the US. I put it through its paces—and let’s just say the results were… surprising.

Operator is not your typical chatbot. Unlike ChatGPT or Gemini, this tool can actually view web pages, click buttons, type in forms, and complete tasks. In theory, it means you can say: “Hey Operator, book a table for two in Eindhoven,” and boom—it’s done.

But how autonomous is it really? Operator doesn’t use traditional APIs—it uses a built-in browser to visually interpret and interact with websites like a human would. It can collect data, complete tasks, and even work with platforms like OpenTable. Still, I ran into a few limitations along the way.

I gave Operator several tasks to test its capabilities and see whether it could really make a difference in daily life.

Teaching GPT Operator to Make a Reservation

Lately, I’ve forgotten to book restaurants when meeting clients or friends. That’s becoming more problematic now that restaurants are often fully booked. So I gave Operator a task:

“Book a table for two in Eindhoven at restaurant X (name not relevant), Friday night at 7 PM.”

Operator enthusiastically opened the restaurant’s site via OpenTable (a platform most places I visit use). But it quickly ran into problems with the dynamic interface.

  • No login prompt: Operator didn’t ask for my login details, and therefore got stuck at the reservation page. Without logging in, it couldn’t complete the booking.
  • Wrong selection: Instead of checking availability, it stayed on the homepage and selected random options without showing real-time slots.
  • No flexibility: When my first choice (7 PM) wasn’t available, Operator didn’t suggest alternatives. A human would immediately try a different time or restaurant—but Operator just gave up.

After 10 minutes, I had to take over and book it myself. If I hadn’t, I would’ve been stuck without a table again.

Automating Simple Intern Work

After the restaurant test, I tried something more work-related:

“Find 20 popular crypto influencers on YouTube, collect their LinkedIn profiles and email addresses, and put it all into an Excel sheet.”

The first few minutes were genuinely impressive. Operator opened a browser, searched for crypto influencers, and started collecting info. But soon, the issues began:

  • Poor search strategy: Instead of searching YouTube directly, it used Bing as the primary source—leading to irrelevant or outdated results. A human would obviously start on YouTube itself, where bios and contact links are listed. Operator didn’t.
  • Hallucinations: Operator started inventing LinkedIn profiles and email addresses. Some contact details were completely fictional and didn’t exist anywhere online. If I had blindly used this data, I would’ve ended up with a long list of useless—or even damaging—leads.
  • Speed issues: Scrolling, clicking, and typing took several seconds per action. After 20 minutes, it had only found 10 influencers—and much of the data was incorrect. A manual search would’ve been faster and far more accurate.

In short: if Operator were an intern, I’d thank them politely… and never hire them again.

Operator as a Personal Shopper

Next, I tested something that often takes up unnecessary time: online shopping for basic things. So I gave Operator this task:

“Order a pack of coffee and a USB-C to USB cable from a major Dutch webshop.”

At first, things went well. Operator searched for the products, added them to the cart, and went to the checkout page. Then came the issues:

  • No payment handling: Operator couldn’t process payment or ask me to step in. So the order remained incomplete.
  • Wrong product match: It selected a USB-C cable, even though I had specifically asked for a USB-C to USB cable.
  • Ignored error messages: When a product was out of stock, Operator didn’t try alternatives. A human would intuitively pick another brand or size—but Operator just stopped.

The result: a half-filled cart and a purchase I still had to complete manually.

Booking Flights at Lightning Speed?

Lastly, I tried the example OpenAI itself often gives: booking a flight. I travel frequently, so I was hopeful. But again, it fell short.

It did, however, show me what Operator is good at: handling simple, repetitive tasks—like placing the same weekly order from the same supplier.

But anyone who has booked a flight knows how many steps are involved. How many choices there are. How useful it is to see if flights are cheaper a few hours earlier. Then there’s seat selection (which varies across planes), meal preferences, luggage options—you name it.

Despite its shortcomings, I still believe Operator has real potential. This is only the first version, and OpenAI will undoubtedly improve its speed and accuracy. Just compare the first version of GPT to what we have today.

Affordable alternatives like DeepSeek could also make this technology more accessible. Other players like Google (with Project Mariner) and Anthropic (with their own Computer Use AI) are working on similar systems. That competition means we’ll likely see even more powerful AI agents soon.

For now? It’s an impressive demo, but not a game-changer. My job is safe… for now. But ask me again in a year.


New AI Tool DeepSeek: From Imitator to Pioneer


In a short time, Chinese startup DeepSeek has rewritten the rules of global AI development. While the United States has long dominated AI innovation across the board, a small company from Hangzhou, China, has caused a global shockwave over the past few days. In this article, I dive into this new model—and spent a few days testing it myself.

The launch of DeepSeek feels like a classic Sputnik moment—an unexpected breakthrough that jolts the world awake and signals the beginning of a new era of technological progress. Just as the Soviet Union launched the first satellite in 1957, sparking the space race, DeepSeek may well mark the beginning of a new phase in the AI race.

I tried an earlier version of DeepSeek back in November, but it didn’t leave much of an impression. This newest release, however, left me stunned—especially when comparing it to the major American players.

How DeepSeek Stands Out from U.S. AI Models

Look at the biggest, most powerful models—GPT, Gemini, LLaMA—and the cloud infrastructure and chips required to run them. Nearly all of it is in the hands of U.S. companies. Out of nowhere, this Chinese startup emerged with an AI model that, on several fronts, outperforms its American competitors:

  • Training costs: While U.S. models reportedly required hundreds of millions of dollars to train, DeepSeek claims to have done it for just $6 million.
  • Performance: In independent benchmark tests, DeepSeek outperformed Meta’s LLaMA 3.1, OpenAI’s GPT-4o, and Anthropic’s Claude Sonnet 3.5 on accuracy—across complex problem-solving, math, and coding.
  • Cost-efficiency: DeepSeek is 98% cheaper to use than GPT or Gemini.

“To see the DeepSeek new model, it’s super impressive in terms of both how they have really effectively done an open-source model that does this inference-time compute, and is super-compute efficient. We should take the developments out of China very, very seriously.” – Satya Nadella, CEO of Microsoft

It’s always worth looking at actual numbers in AI. We’ve been so focused on developments in the U.S. with familiar tools like GPT and Gemini, but behind the scenes, China has been building aggressively. Last year alone, China filed 38,000 AI patents (compared to 6,300 in the U.S.); it also has the largest active AI user base and ranks second only to the U.S. in the number of AI models launched.

Antifragility in Action

But what struck me most was something Nassim Taleb—one of my favorite authors—describes as antifragility: systems or entities that grow stronger through stress, limitations, or adversity. China has been severely restricted by U.S. sanctions, especially around chip imports necessary for running AI models. But DeepSeek is a perfect example of antifragility—it was forced to become more creative and efficient, ultimately surpassing those who didn’t face such limitations.

“Necessity is the mother of invention. Because they had to figure out work-arounds, they actually ended up building something a lot more efficient.” – Aravind Srinivas, CEO of Perplexity

The results are already clear. DeepSeek became the most downloaded app in recent days, caused a $1.2 trillion drop in Western AI stock valuations, and made the American Stargate project look outdated by comparison. Meta has reportedly launched multiple “war rooms” to study how DeepSeek developed its model so efficiently—especially after DeepSeek announced it would invest another $60 billion into AI.

More Temu Trash or TikTok Brilliance?

Looking at the model overall, I see several clear advantages over GPT-4o:

  • Open-weight model: DeepSeek-R1 is open-weight—its training data isn’t public, but the algorithms can be studied and modified. That’s not possible with GPT-4o, which is fully closed-source.
  • Chain of Thought (CoT) reasoning: The model solves complex problems step-by-step, much like humans do. This makes it better at multi-step reasoning tasks. In coding tasks, DeepSeek not only provides the code but also explains how components work together—great for beginners.
  • Mixture-of-Experts (MoE) architecture: With 671 billion parameters, only 37 billion are activated per task, making it highly efficient in terms of computing power and energy usage.
  • Openness & low cost: DeepSeek’s model weights are freely available and it’s largely free to use, unlike GPT’s paid models. It can even run locally (on a MacBook, for example), reducing costs and privacy concerns, and its API access is cheaper.
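To make that concrete: in a Mixture-of-Experts model, a small router picks a handful of experts per token, so only a fraction of the parameters does any work. Below is a toy sketch of top-k routing in plain Python. The numbers and the gating logic are illustrative only; this is not DeepSeek’s actual architecture.

```python
import math

def route_top_k(gate_scores: list[float], k: int = 2) -> list[int]:
    """Pick the k experts with the highest router scores; only these run."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return sorted(ranked[:k])

def softmax(xs: list[float]) -> list[float]:
    """Turn the winning scores into mixing weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Eight toy experts; the router scores each one for the current token.
scores = [0.1, 2.3, -0.5, 1.7, 0.0, -1.2, 0.4, 0.9]
active = route_top_k(scores, k=2)          # only these experts compute
weights = softmax([scores[i] for i in active])  # how to mix their outputs
```

With eight experts and k=2, only a quarter of them run per token, which is the same reason a 671-billion-parameter model can activate just 37 billion parameters per task.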

“We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all.” – Jim Fan, Senior Research Manager at NVIDIA

Battle of the Bots

All that sounds great—but does it actually work better? After a mediocre test back in November, I gave DeepSeek a proper second chance—running side-by-side comparisons with GPT across a range of simple and complex daily tasks.

Where DeepSeek Shines

  • Creativity & adaptability: DeepSeek really stands out in creative tasks. Writing vivid character descriptions or catchy stories felt faster and more natural than with GPT. It easily adapts to the required tone and style—whether formal, playful, or anything in between—while GPT often needed multiple steps or prompts to do the same.
  • Coding help: I tested it with some buggy scripts. DeepSeek spotted issues quickly and offered not only accurate fixes but also clear, beginner-friendly explanations.
  • Speed & efficiency: Thanks to its MoE architecture, DeepSeek delivers fast, detailed responses—even for complex tasks. It was noticeably faster than GPT in most tests.

Where It Falls Short

  • Accuracy on niche topics: For very specific or historical topics, DeepSeek sometimes gave incomplete or incorrect answers. I noticed more hallucinations than with GPT.
  • Handling sensitive content: DeepSeek tends to avoid politically or historically sensitive issues—like the Tiananmen Square protests or the Nanking massacre—likely due to Chinese government influence.
  • Limited support & documentation: DeepSeek’s help resources are far less comprehensive than GPT’s, which can be frustrating for new users looking to get started. I struggled to find decent tutorials or explanations.

Other Cool Use Cases I’ve Seen

  • Easily build an app that scrapes YouTube channels and generates trend reports
  • Watch videos of how the model handles advanced reasoning tasks
  • Create a custom “ready-to-play” game in minutes

“We Recommend Ourselves” Syndrome?

Naturally, there’s some skepticism. The biggest critique? The claim that such a powerful model was trained with so little hardware. And the only report with concrete figures comes—unsurprisingly—from DeepSeek itself.

And since DeepSeek is Chinese, some users worry about how their data might be processed or stored. Especially when handling sensitive information, that concern could become a barrier—even though there’s no concrete evidence of data misuse.

Still, no matter how you look at it, DeepSeek represents a massive leap in AI development. It offers real opportunities for Europe and other regions by lowering the barrier to advanced AI. The focus on efficient models requiring fewer resources makes high-quality AI accessible for small businesses, researchers, and emerging markets—especially important in Europe, where access, transparency, and support for underfunded startups are priorities.

Driving Global Competition

DeepSeek also fuels global AI competition. By pushing forward despite trade restrictions, it shows that innovation doesn’t require unlimited budgets or resources. This could inspire other regions—including Europe—to pursue smarter, more efficient paths to innovation.

In just a few weeks, DeepSeek has accelerated the pace of AI innovation and created a more level playing field. I’m very curious to see how American competitors—and policymakers—will respond.


10 Concrete Ways to Make ChatGPT Tasks Work for You


ChatGPT is increasingly becoming my virtual buddy, helping out with all sorts of daily tasks. The new Tasks feature is a fantastic addition—it goes way beyond what’s currently possible on, say, the iPhone. In this article, I’ll dive into it.

Agents, AGI… this year could be groundbreaking again in terms of AI developments.
Right now, agents still feel like a distant concept for many professionals. But the newly launched Tasks feature strikes a perfect balance—easily accessible and super practical.

It performs daily tasks automatically, without you needing to think about them. From appointment reminders to meal planning or even generating bedtime stories for your kids. In my view, this new ChatGPT function marks a true evolution—from conversational partner to practical assistant.

You can now schedule and automate tasks by simply entering a task description and timeframe. ChatGPT will then execute the task at your chosen time. The feature is currently available to paid Plus, Team, and Pro users and is part of a broader shift toward AI Agents: systems that can independently execute multi-step processes.
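Conceptually, a scheduled task is just a natural-language description plus a trigger time; ChatGPT executes the description when the trigger fires. OpenAI's actual implementation isn't public, but as a mental model you can sketch the idea in a few lines of Python (task wording and times are my own examples):

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Task:
    description: str  # the natural-language instruction the model will execute
    run_at: time      # daily trigger time

def due_tasks(tasks: list[Task], now: datetime) -> list[str]:
    """Return the descriptions of tasks whose trigger time has passed today."""
    return [t.description for t in tasks if now.time() >= t.run_at]

tasks = [
    Task("Summarize my newsletters and flag AI and crypto news", time(7, 0)),
    Task("Suggest three social media posts based on today's news", time(7, 30)),
]

# At 08:00 both triggers have passed; a real agent would now send each
# description to the model and deliver the result as a notification.
print(due_tasks(tasks, datetime(2025, 1, 20, 8, 0)))
```

The point of the sketch: the only thing you supply is the description and the timeframe; everything after the trigger is ordinary prompt execution.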

I always approach new features with a healthy dose of skepticism, asking one question: does this actually make certain tasks easier, or take over the ones I'm not great at or don't enjoy?
With Tasks, I quickly came up with a list of practical uses that I now rely on daily for marketing across my three companies.


The All-Purpose Assistant for Efficient, Up-to-Date Marketing

Here are just a few of the Tasks that make my mornings easier:

  • Every morning, I receive three tips for current social media posts and blogs, based on recent news and even content calendars like Frankwatching’s. I get immediate drafts for posts and blogs, which I can tweak or publish right away.
  • Every morning, I get tips for SEO content and optimization. GPT is connected to tools like Ahrefs and gives me insights into seed keywords and general SEO advice. I’ve submitted my websites and asked GPT to provide 5 daily suggestions for improvement.
  • Every morning, I receive an analysis of my Google Analytics and Google Ads, including optimization tips and 5 interesting insights, which I often use to create new (SEO) content.
  • Every morning, I get an overview of new online reviews for my businesses—plus suggested responses and advice on how to act on feedback. You just need to input the review URLs.
  • Every morning, I get creative business development ideas. This is perhaps the most intense task for GPT, since I ask it to generate unique ideas that I wouldn’t have thought of myself. I also keep refining the prompts to avoid repetition and to get one concrete action tip to follow through the same day.
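To make this concrete: each of the tasks above starts life as a single plain-language instruction. The review monitor, for instance, is set up with something along these lines (my wording, purely illustrative):

```text
Every morning at 08:00, check the reviews at [review URLs] for new entries.
Summarize each new review, draft a friendly reply in my tone of voice, and
flag any recurring complaint I should act on.
```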

My New Go-To for Everything

I now use Tasks widely—both personally and professionally. For instance, I don’t read the news, but I do subscribe to newsletters. A lot of them. 52, to be exact. That caused a bit of content stress.

Now, every morning I get a summary of the most important news related to my favorite topics (like AI and crypto), along with trends, developments, and sources. ChatGPT even summarizes long articles so I only read what matters.

I’m also currently preparing for a climb of Manaslu (8,163 m) in Nepal this September, and every week I get a batch of suggested prep tasks and reminders.

I also belong to the group of people who, once they finally have a free evening, end up endlessly scrolling through Netflix before choosing something. By some estimates, that scrolling eats up five days a year on average. Based on my watchlist, ChatGPT now sends me tailored recommendations, perfectly aligned with my taste.

For my twice-daily meditation routine, I now get gentle reminders along with a helpful tip and a suggested focus for the day.

I’ve also seen plenty of fun examples from others using Tasks for their New Year’s resolutions—like getting a unique, healthy Airfryer recipe every day, or parents getting an original, illustrated bedtime story every night for their kids.


Old Tech in New Bottles?

Yes, you could already set reminders on your calendar, right? But GPT Tasks is different from existing tools like Siri or Apple’s Reminders app—it’s smarter and more versatile. It goes beyond simple reminders and can carry out complex, contextual tasks, like the examples above. It also offers real-time updates, such as newsletter digests, tailored to your preferences.

While traditional tools are mostly passive, GPT Tasks is proactive and context-aware. It generates both tasks and content based on your needs. And just like any prompt: the better your task description, the better the output.
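For example (wording is mine, purely illustrative), compare a vague task description with one that reliably produces useful output:

```text
Weak:   "Give me marketing tips every morning."
Better: "Every weekday at 07:30, suggest three LinkedIn post ideas for a
        Dutch deep-tech audience based on this week's AI news, each with
        a draft opening line and one relevant hashtag."
```

The second version pins down the schedule, the audience, the source material, and the deliverable, which is exactly the context a generative assistant needs.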

This blend of generative power and flexibility makes GPT Tasks a unique and powerful digital assistant.


Hallucinated Reminders

Of course, GPT Tasks isn’t perfect yet. For instance, I’ve noticed some glitches when trying to plan multiple reminders across different days/times. Sometimes it just repeats your input without delivering a useful result—like a meal planner that simply echoed the prompt.

Since the feature is still in beta, some tasks may unexpectedly fail—like reminders not arriving on time. Also, you’re currently limited to 10 active tasks, which can be restrictive if you want to automate many parts of your day.

That said, OpenAI is expected to roll out follow-up features. With future integrations like Operator and Caterpillar, Tasks might evolve to include ordering groceries or booking trips. It could also eventually connect to external apps or smart home devices—like syncing your calendar with household systems.


Copyright © 2026 Jan Scheele
