OpenAI is building its own social media platform: smart move or peak AI hype?

OpenAI – the company behind ChatGPT – is reportedly working on its own social media platform. This news recently surfaced through a series of tweets, leaks, and lively discussions on forums like Hacker News. What started as a rumor now seems more serious than initially thought. Allegedly, there is already an internal prototype where users can post AI-generated images in an Instagram-like feed. In this article, I’ll dive into it!

My first question, probably like many others: why on earth would an AI company dive into the already saturated world of social media? Is this a brilliant strategic idea, or an expensive distraction from their core mission? The first signals about this plan came from The Verge, which reported that OpenAI is working on a social platform where image generation via ChatGPT plays a central role.

Imagine: you create a Ghibli-style AI image, click ‘post,’ and share it with your followers in a feed. Sam Altman, OpenAI’s CEO, is said to be gathering feedback behind the scenes from trusted insiders about the idea.

That Altman is thinking about this isn’t really surprising. Back in February, he responded to a post about Meta’s AI plans by saying, “Ok fine maybe we’ll do a social app.” It sounded like a joke at the time, but now it appears to be a serious strategy. Combine that with OpenAI’s public experiments with an AI image feed on Sora.com, and the picture becomes clearer.


Why this could actually be a smart move

Over the past few days, I’ve been reading through countless forums like Reddit and Hacker News to form a well-rounded opinion on this plan. I have to admit, there are some interesting angles to it.

  1. Fresh data supply
    OpenAI constantly needs new, real user data to improve its models. As more and more online data disappears behind paywalls or gets contaminated with AI-generated content, having a proprietary stream of genuine user data is invaluable. A social network where people actively post, comment, and share provides exactly that.
  2. Creative co-creation
    Instead of endless political arguments or viral videos, an AI-driven platform could revolve around expression and creativity. Think of posts created through collaboration between humans and machines. That’s a fundamentally different starting point compared to existing networks—and potentially refreshing.
  3. Always a digital sparring partner
    Imagine a platform where you type a thought, and the AI helps make it sharper, more visual, or more engaging. That’s not social media as we know it—that’s a personal assistant for your content creation. There’s real potential here. You can already see hints of this happening on platforms like Instagram and WhatsApp, where similar features are being introduced.

A new network… or a repeat of past mistakes?

Of course, there are also plenty of reasons why this could end up being another Google+ moment.

  1. An overcrowded market
    We already have X (formerly Twitter), Threads, Mastodon, Bluesky, Reddit, Discord, Instagram… the list is endless. Why would users adopt yet another network, especially one potentially populated by bots? Personally, I’m finding myself reducing the number of active channels just to maintain focus.
  2. AI as a social illusion
    A network filled with posts and conversations created by AIs may sound fascinating, but in my view, it can quickly feel hollow. Many people value social networks because real people are behind them. If that human layer is missing, there’s little emotional value left.
  3. Privacy and trust
    OpenAI is already under fire for using training data without clear consent. If the company also starts reading your social posts—whether you consent or not—how transparent and fair will that be? And who decides what you are allowed to share on a network built by an AI company?
  4. Loss of focus at OpenAI
    Putting on my startup coach hat for a moment: OpenAI is fundamentally an AI research and product company. Building a social network—with all the moderation, growth hacking, community building, and everything else that entails—risks stretching the company too thin. Resources currently devoted to fundamental AI development could be scattered across a project far outside their original expertise. In a market where competition around model quality, reliability, and speed is heating up, this could be a major distraction.

Caught between ambition and a hunger for data

Whether this is a good idea really depends on your perspective. For OpenAI, it’s a smart move strategically: it delivers user data, visibility, and a way to integrate their tools more deeply into everyday life. But for users, the tension is greater. Do we want yet another platform? And more importantly: do we want to fill that platform with our thoughts, images, and conversations, knowing they might be used to train AI systems?

If this network manages to create a more positive, creative, and AI-assisted form of online interaction, it could add real value. But if it becomes just another smart data trap, filled with AI content and lacking a human soul, people will likely walk away as quickly as they arrived.


When will we hear more?

For now, nothing has been officially announced, but the direction is clear. The AI image feed on Sora seems to be the first public test. It’s likely that in the coming months, OpenAI will experiment with small features within ChatGPT itself—like sharing generated content with others—before potentially launching a separate app.

Whether this new network can truly attract users comes down to one thing: does it genuinely enhance the way we interact online? If it’s just an AI-coated version of existing networks, the hype will fade fast. But if OpenAI succeeds in giving co-creation and AI interaction a more human face, it could very well carve out a whole new kind of online space.

The Battle for AI Supremacy: Manus as China’s Boldest Move Yet?

The race for AI dominance is starting to look more and more like a digital Cold War between the US and China. While the US has long led the way with companies like OpenAI and Google, China is now making serious moves to close the gap. Earlier this year, the launch of DeepSeek already shook up the sector—and now there’s Manus: an AI agent that not only responds to commands but can also perform tasks independently. I finally got the chance to test this tool and share my experience in this new article.

These days, we’re seeing weekly updates from major AI platforms. Faster, smarter, with improved reasoning and source referencing. I previously wrote about my mixed experiences with AI agents like GPT Operator and DeepSeek. More recently, I saw countless intriguing use cases pop up featuring the new Chinese AI hype: Manus. In my view, Manus is no longer just a chatbot—it’s an AI that can think, plan, and execute without constant human supervision.

Manus positions itself as a direct challenger to OpenAI’s Operator and DeepResearch. Some have even called its launch a “second DeepSeek moment,” not just because of its competitive pricing but also because of the quality of its output. But is this really the next big breakthrough—or just smart marketing?

Jack-of-all-Manus

Manus is an AI agent developed by the Chinese company Butterfly Effect. What sets the tool apart is its so-called multi-agent architecture. Instead of one AI model trying to do everything, Manus uses a combination of specialized sub-agents. This allows tasks to be split and handled more efficiently.
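Since Butterfly Effect hasn’t published technical details, here’s a deliberately simplified sketch (in Python, with made-up agent names) of what a multi-agent setup like this could look like: a planner splits a task into subtasks and routes each one to a specialized sub-agent.

```python
# Toy sketch of a multi-agent architecture. The agent names and logic are
# illustrative assumptions on my part; this is NOT Manus's actual design.

def search_agent(subtask: str) -> str:
    """Specialized sub-agent for looking things up."""
    return f"[search results for: {subtask}]"

def booking_agent(subtask: str) -> str:
    """Specialized sub-agent for drafting bookings."""
    return f"[booking drafted for: {subtask}]"

def writer_agent(subtask: str) -> str:
    """Specialized sub-agent for producing text output."""
    return f"[itinerary text for: {subtask}]"

# The "planner" side of the system: a mapping from capability to sub-agent.
AGENTS = {
    "search": search_agent,
    "booking": booking_agent,
    "write": writer_agent,
}

def run_task(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a plan of (agent_name, subtask) steps in order."""
    return [AGENTS[name](subtask) for name, subtask in plan]

results = run_task([
    ("search", "flights to Reykjavik in June"),
    ("booking", "hotel near Landmannalaugar"),
    ("write", "7-day hiking itinerary"),
])
```

The point of the split is that each sub-agent can be tuned (or even be a different model) for its own narrow job, instead of one generalist model juggling everything.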

In theory, Manus could plan an entire trip: search for flights, book hotels, compare prices, and even create a travel itinerary. Or assist a recruiter by analyzing résumés, ranking candidates, and drafting interview questions. This is a fundamentally different approach from classic chatbots like ChatGPT, which still require human supervision to complete most tasks accurately.

Much like DeepSeek—proving that high-quality AI models can be built with a fraction of Western budgets—Manus aims to make a similar impact in the AI-agent category. Which, in my view, is still the biggest AI trend of the year.

Why the Hype Around Manus?

Manus isn’t just technologically interesting—it’s also a marketing masterpiece. The first demo videos went viral instantly, showing off an AI that could independently manage complex tasks.

It struck a chord: within days, the Manus community grew to over 180,000 members on Discord, with invite codes selling online for thousands of dollars.

That exclusivity—invite-only access—helped build the hype even more. Just like with DeepSeek, it created the impression that Manus was a game-changer, available only to a select few. On top of that, the geopolitical rivalry between China and the US plays a strong role. Many Chinese users see Manus as a symbol of technological independence and a direct response to OpenAI and Google.

But the real question remains: does it actually work, and does it deliver on its promises?

Three Types of Tests with Manus

To test the promises of Manus, I put the platform through three real-world tasks I’m currently working on.

1. Booking a Vacation

I’m planning to go hiking in Iceland this June. But I have some specific preferences: a different hike each day, specific flights, a certain type of lodging, and side activities like spa visits. Manus was able to find flights and list some good hotels, but couldn’t complete the bookings. It also missed key details like baggage fees for my hiking gear.

2. Creating a Marketing Campaign

For one of my businesses, I asked Manus to set up a complete social media strategy, including ads and audience analysis. The results were surprisingly impressive: Manus analyzed competitors, created a posting schedule, and even generated ad copy. But after a review, some suggestions turned out to be unrealistic or based on outdated data. Bummer!

3. Automating a Recruitment Process

For a large event I’m organizing, I wanted Manus to help select from volunteers who submitted applications. While the AI gave solid suggestions, a deeper look revealed that some rejections were unfair. The system struggled with nuance in work experience and favored keyword-heavy résumés over actual qualifications.

Execution Falls Short

Manus is great at structuring and planning tasks efficiently, but in my opinion, its execution still leaves much to be desired. It’s not a fully autonomous AI, but more like a clever assistant that can take over parts of tasks—but still needs human oversight.

The tool struggles with reliability. On various forums and group chats, I saw users reporting that the AI would get stuck in infinite loops or generate incorrect information. This is a major issue for applications where precision matters—such as financial analysis for crypto trading.

Speed is another weak point. While OpenAI’s DeepResearch completes tasks in seconds, Manus often takes minutes. I tested this a few times, and for more complex tasks, it took quite a while to generate a usable result.

There’s also a lack of transparency. Butterfly Effect gives little detail on how the AI actually works. It’s not a fully new tool either, but a so-called “wrapper” built on existing models like Anthropic’s Claude and Alibaba’s Qwen. How much of it is truly innovative remains unclear.

And then there’s the issue of privacy and security. Just like with DeepSeek, Manus raises concerns about data protection. Western companies will likely be hesitant to grant a Chinese AI access to sensitive business information—especially given China’s strict regulations on data collection and state control. Not to mention the recent backlash surrounding DeepSeek.

Will We Keep Hearing About Manus?

Manus AI has the potential to usher in a new era of autonomous AI agents—but it’s not quite there yet. The technology is promising, but far from flawless. It feels like a rough diamond that still needs a lot of polishing before it can truly compete with established players.

If Butterfly Effect improves its infrastructure, increases reliability, and becomes more transparent about how Manus works, it could become a serious contender in the AI race—especially at its current price point, and because it’s far easier to use than OpenAI’s Operator. Until then, Manus remains a fascinating experiment—with tons of potential, but also plenty of work to be done.

The Road to AGI: How Close Are We to Human-Level Artificial Intelligence?

The idea of artificial intelligence being as smart and versatile as a human being sounds like science fiction. But with the rapid pace of AI development, more and more experts are asking: how long will it take before we reach Artificial General Intelligence (AGI)? The moment an AI can reason independently, invent new concepts, and adapt as flexibly as we do. In this article, I dive into that question.

Recently, I had a discussion about AI with my father (69, amateur chess player), and before long we were talking about Garry Kasparov. In the 1990s, Kasparov was a world-renowned chess grandmaster, famous for his playing style and numerous victories. But in 1997, something happened that shocked the world: he lost to IBM’s supercomputer Deep Blue. For the first time, a world chess champion was defeated by a computer—something many had thought impossible. Kasparov was stunned and later said: “I felt a new kind of intelligence, a spirit in the machine.”

Yet Deep Blue wasn’t what we’d now call AGI. The supercomputer could only play chess. But the moment marked a turning point: technology began outperforming humans at specific tasks. It sparked a debate about the limits of artificial intelligence. Today, I see the same discussion flaring up again—but now it’s not just about winning a game, but disrupting entire industries.

When Will We Have AGI?

Predictions on when we’ll achieve AGI vary widely. Some researchers, like Dario Amodei of Anthropic, believe we’ll see systems with early AGI traits as soon as 2026. Others, like AI pioneer Geoffrey Hinton, think it might take five to twenty years. Quite a margin…

But not everyone is convinced we’ll ever get there. Yann LeCun, the highly respected AI researcher at Meta, argues that AGI is still decades away. He even suggests it may never be possible in the way people imagine.

Demis Hassabis, CEO of DeepMind, is more cautious in his forecasts: “I think human-like reasoning in AI is possible within a decade, but it’s far from certain. We still need to make fundamental breakthroughs in our understanding of intelligence.”

Zooming out, though, it’s clear to me that AI is already having a huge impact—on individuals, organizations, and entire industries. Both positive and negative. Whether AGI arrives soon or not, the boundaries of what AI can do are already expanding rapidly.

From Narrow AI to General Intelligence

Today’s AI systems, like GPT-4 and Gemini, are impressively versatile. I’m continually amazed by how these LLMs support and amplify my daily work. But these models are still specialized. They can generate text, write code, create images—but all within clearly defined boundaries. A language model like GPT can’t perform complex financial analysis like Bloomberg’s AI. McKinsey’s AI can’t compose music or analyze medical scans.

AGI would need to combine all of these skills into one system. An AI as flexible as a human would learn from experience, adapt to new problems, and perform tasks it was never explicitly trained for.

That’s a massive leap beyond today’s AI. Sam Altman, CEO of OpenAI, calls AGI “the ultimate technological leap” and says: “Once we reach AGI, it will become the most powerful tool humanity has ever created.”

So Where Are We in the AGI Race?

While AGI is still a thing of the future in my view, we’re already seeing AI systems perform tasks that were recently thought impossible.

AI models like GPT-4 and Gemini outperform most humans on complex exams. OpenAI’s GPT-4 scored in the top 10% on the Uniform Bar Exam for U.S. lawyers, and Google’s Med-PaLM can answer medical questions at the level of an experienced doctor. These systems not only provide correct answers but also reason through complex problems, spot patterns in data, and even generate hypotheses.

AI’s ability to independently solve problems and make connections grows with each version. AlphaFold, a breakthrough from DeepMind, predicted the 3D structure of almost all known proteins—a problem researchers (my younger brother among them) had struggled with for decades. To me, this proves that AI already functions as an intelligent system, going beyond simple pattern recognition.

Geoffrey Hinton, one of the founding fathers of deep learning, says: “We’re reaching a point where AI is starting to learn like humans do. That’s both exciting and worrying.”

But despite this progress, AI models are still limited. They lack motivation, can’t develop abstract concepts like humans do, and rely heavily on vast amounts of training data. This makes the leap to Artificial General Intelligence (AGI) complex.

What’s the Next Step in AGI Development?

Looking at current challenges and developments, AGI remains, in my view, an ambitious goal for now. Think of the technologies we’ve developed over the past century. Progress came in gradual steps—from the lightbulb to the internet, from the first computer to smartphones. But AGI is a different story. It’s not a matter of incremental improvements; it’s a bold leap into a fundamentally new reality.

Sam Altman recently said: “We’re now confident we know how to build AGI.” And not decades from now—but possibly within Trump’s next presidential term, meaning in just 3.5 years. His prediction no longer feels like science fiction. The computing power, models, and scalability show that final barriers are falling faster than expected.

Artificial General Intelligence (AGI) won’t arrive overnight, but the first systems that resemble it are already on the horizon. If the predictions are right, it won’t be long before we’ll have to ask ourselves: how do we collaborate with an intelligence that can outperform us in every domain?

Want to Make AI Think Better? Try Chain-of-Thought Prompting!

Large language models are great at tasks like writing and translation, but they often struggle with complex problems like math and logical reasoning. That’s because they don’t naturally think step by step. In this article, I’ll show you how you can make them reason step by step—with a surprisingly simple trick.

For a long time, we assumed that AI models just needed more data and more computing power to improve. But even with major advances in language understanding, models kept struggling with complex reasoning—math, logic puzzles, you name it. They often produced answers that sounded right but turned out to be complete nonsense.

The Chain of Thought

This began to change when researchers discovered a clever trick: Chain-of-Thought (CoT) prompting. Instead of asking the model to give a direct answer, they simply added the phrase: “Let’s solve this step by step.” Suddenly, the AI started breaking problems down logically and producing more accurate answers.

I previously wrote about the Chinese challenger to GPT, Llama, and Gemini: DeepSeek. In my view, CoT only became a real hype once DeepSeek showed how powerful CoT prompting can be—because the model is trained to apply this step-by-step reasoning by default.

This made it easier for everyone to get AI to truly think, instead of just guessing. CoT prompting is now seen as one of the most effective ways to make AI models smarter and more reliable. Whether it’s for math, customer service, or business analysis—AI can now reason, all thanks to a simple but brilliant prompting technique.

The Art of Prompting AI

We’ve now seen various prompting styles emerge.

Zero-shot prompting

You give the model a task with no examples. This works well for simple tasks, but not for complex problems.

Example: “Write a poem about AI.”
The model generates a poem without further guidance.

Few-shot prompting

You give a few examples to help the model understand the structure of the task. This is useful for more structured outputs like summaries or translations.

Example: “Here are two article summaries. Use the same style to summarize this next one.”

Active prompting

You evaluate the model’s output and provide feedback so it can improve. I’ve sometimes spent an hour having this kind of back-and-forth.

Example: “This answer isn’t precise enough. Please give a more detailed explanation and rewrite the conclusion.”
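To make the difference between these styles concrete, here’s a small illustrative sketch of how zero-shot and few-shot prompts are typically assembled in the chat-message format most LLM APIs use. The roles and example texts are my own assumptions, not tied to any specific provider.

```python
# Illustrative only: building message lists for zero-shot vs few-shot
# prompting. The dict format mirrors the common chat-API convention of
# {"role": ..., "content": ...} messages.

def zero_shot(task: str) -> list[dict]:
    """Zero-shot: just the task, no examples."""
    return [{"role": "user", "content": task}]

def few_shot(task: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Few-shot: prepend worked (question, answer) pairs before the task."""
    messages = []
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task})
    return messages

poem_msgs = zero_shot("Write a poem about AI.")
summary_msgs = few_shot(
    "Summarize this third article in the same style.",
    [("Summarize article 1.", "Short summary in style X."),
     ("Summarize article 2.", "Short summary in style X.")],
)
```

Active prompting then simply continues this same conversation: you append your feedback as a new user message and let the model try again.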

How Can You Use Chain-of-Thought Prompting Yourself?

Lately, I’ve been experimenting a lot with CoT, and these tips work really well for me:

1. Use a step-by-step prompt

Add “Let’s think through this step by step” to your prompt to encourage logical reasoning.

Example: “What’s the square root of 144? Let’s solve this step by step.”

2. Provide a good example

Let the model learn from a carefully worked-out reasoning process.

Example: “Here’s how to do a budget analysis: first, list all income sources, then subtract expenses…”

3. Let the model generate multiple answers

Compare the outputs and choose the most consistent one.

Example: “Give three different summaries of this text and select the best one.”

4. Use active prompting

Give feedback and let the model correct its mistake.

Example: “You skipped the third step. Try again and include that step.”
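Tip 1 is trivially easy to automate. A minimal sketch, assuming you assemble your prompts in code before sending them to whatever model you use:

```python
# Minimal chain-of-thought helper: append the step-by-step trigger phrase
# to any prompt. Purely illustrative; the trigger wording is the one from
# the tips above.

COT_TRIGGER = "Let's think through this step by step."

def with_cot(prompt: str) -> str:
    """Return the prompt with a CoT trigger appended on its own line."""
    return f"{prompt}\n{COT_TRIGGER}"

cot_prompt = with_cot("What is the square root of 144?")
```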

Not All Models Handle Chain-of-Thought Prompting Well

In my experience, not all models respond well to CoT. Research also shows that CoT prompting works best with large models (100+ billion parameters) like GPT-4 and DeepSeek. Smaller models struggle with long, logical reasoning chains.

Here are a few other important factors in using CoT effectively:

  • Self-consistency: Let the model solve the same problem multiple times and pick the most logical answer. This helps reduce errors and leads to more reliable responses.
  • Robustness: CoT prompting works even if your examples aren’t perfectly worded. You don’t need flawless language to get results.
  • Prompt sensitivity: A poorly written prompt can ruin your CoT attempt. Make sure your instructions are clear and your question is well-defined.
  • Coherence: The steps should logically follow one another. If a step is missing or flawed, the final conclusion may be incorrect.
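The self-consistency idea from the list above is simple to implement yourself: run the same question several times and keep the final answer that comes up most often. A minimal sketch—the sampled answers here are made up; in practice they’d come from repeated model calls:

```python
from collections import Counter

# Self-consistency sketch: majority vote over several sampled answers.
# The `samples` list stands in for the final answers of repeated CoT runs.

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer among the samples."""
    return Counter(answers).most_common(1)[0][0]

# Imagine five CoT runs on the same problem produced these final answers:
samples = ["12", "12", "11", "12", "13"]
best = majority_vote(samples)  # "12"
```

Because an occasional reasoning chain goes off the rails, voting across several runs filters out the outliers at the cost of extra model calls.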

Chain-of-Thought Prompting is a Game-Changer

In my view, Chain-of-Thought prompting is truly a game-changer for AI. I’ve seen firsthand how much it improves the quality of output. With the right prompts and techniques, you enable AI to think better, provide more accurate answers, and solve complex problems.

Start with simple tasks and gradually introduce step-by-step reasoning. You’ll soon notice that AI not only responds more intelligently, but also reveals insights that would otherwise remain hidden.

Got your own tip? Share it in the comments!

Forget the Human—Is Your Marketing AI-Agent Proof?

For thousands of years, marketing has focused on people. But through my work with AI agents and tools like GPT Operator, I see a rapid shift in how people discover and buy products. Increasingly, they’re no longer making decisions themselves—AI agents are doing the work for them. How do you adapt to that? That’s what I explore in this article.

From travel planners to shopping assistants, AI models like ChatGPT, Google Gemini, and Meta’s Llama are becoming the intermediaries between consumers and brands. But does this work the way companies hope? Will AI agents really determine which brands consumers choose? Or are companies risking invisibility by blindly optimizing for AI?

AI Will Soon Fill Your Shopping Cart—But Based on What?

According to a Boston Consulting Group study, 28% of consumers already use AI to help choose products like cosmetics. But does that mean AI agents will eventually make all purchase decisions? I think that’s still very much up in the air.

Just look at Google’s featured snippets. For years, I optimized my companies’ SEO strategies to appear at the top of search results. But now that AIs like Gemini and ChatGPT generate their own answers, it’s unclear whether consumers will even click through to websites at all.

The same could happen with AI agents. Businesses might invest huge amounts of time and resources into AI-agent-friendly branding, only to discover that the AI ends up recommending a competitor’s product for unclear reasons.

Another issue is how easily AI agents can be influenced. Google, OpenAI, and Meta keep their recommendation algorithms largely secret. I had hoped regulators—especially in the EU—would enforce more transparency, but that hasn’t happened yet. And even if companies figure out which factors count, AI developers can change the rules at any time. We’ve seen this with Google’s ever-changing search algorithm.

One more risk I foresee: AI agents may not recommend the best products, but the ones for which they receive the most data—or financial incentives. Just like search engines and social media can be manipulated through ads and SEO, AI agents could be biased too. That means companies aren’t just competing with each other—they’re competing with the opaque decision-making of the AI itself.

Smart AI, Dumb Choices: How Brands Struggle With AI Recommendations

Here are a few recent examples I came across that highlight how things can go very right—or very wrong.

Ballantine’s Whisky, a product meant for a broad audience, was misclassified by AI agents like Meta’s Llama as a premium product. Why? Because there was a lot of online content about its luxury editions. To correct this perception, Ballantine’s changed its ads and content strategy to emphasize the accessibility of its standard whisky. But it’s still unclear whether the AI actually updated its view.

Klarna launched an AI customer service assistant based on OpenAI technology in early 2024. Within the first month, it handled the workload of 700 full-time employees, drastically reducing customer service costs.

Initially, customers were just as satisfied with the AI as with human agents. But when Klarna expanded the AI to offer product comparisons and recommendations, problems began. The AI gave conflicting advice or favored certain brands based on unclear criteria.

Booking.com and Expedia are experimenting with AI-driven search results, where AI agents suggest options based on preferences and past bookings. Hotels and travel providers are no longer just competing on price and quality—but also on how well their offers get picked up by AI models. This forces businesses to tailor their marketing to criteria used by AI agents, without knowing exactly what those criteria are.

How to Make AI Work For You—Not Against You

AI agents are increasingly deciding what consumers see. That requires a whole new way of thinking. Traditional marketing techniques still matter, in my view, but they must be expanded with strategies tailored to how AI agents process and recommend information.

Maintain a consistent and credible digital presence. AI models rely on everything available about your brand. If conflicting information is online, it can lead to confused—or even negative—AI interpretations of your brand.

Understand how AIs perceive your brand and shape that image proactively. Use tools like Share of Model to analyze how AI agents view your brand. Make sure AI has access to trustworthy sources like articles from respected platforms.

Structure your content for AI crawlers. Just as SEO is important for search engines, structuring content is crucial for AI agents. Use schema.org markup, clear metadata, and fast loading speeds to make your content easier for AI to interpret.
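As an illustration of that schema.org markup, here’s a sketch that generates JSON-LD for a fictional product page. The field values are invented for the example; the full vocabulary is documented at schema.org.

```python
import json

# Illustrative schema.org Product markup as JSON-LD. All values are
# made up; adapt the fields to your own product data.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Standard Whisky",
    "description": "An accessible blended whisky for a broad audience.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "EUR",
    },
}

# Embed the result in your page inside a
# <script type="application/ld+json"> ... </script> tag.
json_ld = json.dumps(product, indent=2)
```

Structured markup like this gives both search engines and AI crawlers an unambiguous, machine-readable version of your brand story, instead of leaving them to infer it from free text.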

Experiment with prompt influence and online conversations. Research from Carnegie Mellon shows that small changes in how questions are phrased can significantly impact AI recommendations. Test prompts strategically and steer conversations on platforms like Reddit and Quora.

The rise of AI agents is changing how consumers make decisions—but that doesn’t mean companies should blindly chase algorithms. AIs are volatile, evolving, and easily influenced—and often not in favor of the businesses they serve.

Companies that focus solely on AI optimization without a broader strategy risk becoming invisible if AI changes the rules. And those rules are shifting faster than most companies can adapt.

What does matter is a hybrid strategy: stay attractive to AI agents, but don’t lose sight of human connection. People still form emotional bonds with brands—bonds no AI can replicate. The best path forward is to use AI smartly, without losing control of your story.

GPT Operator as a Personal Assistant: Does It Deliver? My 4 Experiences

In the midst of all the DeepSeek hype, the launch of GPT Operator went almost unnoticed. It was supposed to be OpenAI’s long-awaited answer to the biggest AI trend of the year. But how does it actually work—and does it live up to expectations? I decided to experiment with Operator and share my experiences in this article.

Back in 1950, Norbert Wiener, the father of cybernetics, wrote in his book The Human Use of Human Beings that automatic machines would one day be able to take over human work. He warned that technology would automate not just physical labor, but also mental tasks.

At the time, this sounded like science fiction—but Wiener already foresaw that machines would become smarter than people expected. As he wrote:

“The automatic machine, when used for production, competes with human labor not on the basis of man’s muscle power, but on the basis of his intelligence.” – Norbert Wiener

AI agents have long been a dream scenario, but GPT Operator feels like a serious step forward. The technology is powered by a new model—the Computer-Using Agent (CUA)—which combines vision and reasoning. And yes, it’s available only to the happy few with a $200/month Pro subscription. But just how smart is this AI? Last weekend, I got to play with Operator via a client in the US. I put it through its paces—and let’s just say the results were… surprising.

Operator is not your typical chatbot. Unlike ChatGPT or Gemini, this tool can actually view web pages, click buttons, type in forms, and complete tasks. In theory, it means you can say: “Hey Operator, book a table for two in Eindhoven,” and boom—it’s done.

But how autonomous is it really? Operator doesn’t use traditional APIs—it uses a built-in browser to visually interpret and interact with websites like a human would. It can collect data, complete tasks, and even work with platforms like OpenTable. Still, I ran into a few limitations along the way.

I gave Operator several tasks to test its capabilities and see whether it could really make a difference in daily life.

Teaching GPT Operator to Make a Reservation

Lately, I’ve forgotten to book restaurants when meeting clients or friends. That’s becoming more problematic now that restaurants are often fully booked. So I gave Operator a task:

“Book a table for two in Eindhoven at restaurant X (name not relevant), Friday night at 7 PM.”

Operator enthusiastically opened the restaurant’s site via OpenTable (a platform most places I visit use). But it quickly ran into problems with the dynamic interface.

  • No login prompt: Operator didn’t ask for my login details, and therefore got stuck at the reservation page. Without logging in, it couldn’t complete the booking.
  • Wrong selection: Instead of checking availability, it stayed on the homepage and selected random options without showing real-time slots.
  • No flexibility: When my first choice (7 PM) wasn’t available, Operator didn’t suggest alternatives. A human would immediately try a different time or restaurant—but Operator just gave up.

After 10 minutes, I had to take over and book it myself. If I hadn’t, I would’ve been stuck without a table again.

Automating Simple Intern Work

After the restaurant test, I tried something more work-related:

“Find 20 popular crypto influencers on YouTube, collect their LinkedIn profiles and email addresses, and put it all into an Excel sheet.”

The first few minutes were genuinely impressive. Operator opened a browser, searched for crypto influencers, and started collecting info. But soon, the issues began:

  • Poor search strategy: Instead of searching YouTube directly, it used Bing as the primary source—leading to irrelevant or outdated results. A human would obviously start on YouTube itself, where bios and contact links are listed. Operator didn’t.
  • Hallucinations: Operator started inventing LinkedIn profiles and email addresses. Some contact details were completely fictional and didn’t exist anywhere online. If I had blindly used this data, I would’ve ended up with a long list of useless—or even damaging—leads.
  • Speed issues: Scrolling, clicking, and typing took several seconds per action. After 20 minutes, it had only found 10 influencers—and much of the data was incorrect. A manual search would’ve been faster and far more accurate.

In short: if Operator were an intern, I’d thank them politely… and never hire them again.

Operator as a Personal Shopper

Next, I tested something that often takes up unnecessary time: online shopping for basic things. So I gave Operator this task:

“Order a pack of coffee and a USB-C to USB cable from a major Dutch webshop.”

At first, things went well. Operator searched for the products, added them to the cart, and went to the checkout page. Then came the issues:

  • No payment handling: Operator couldn’t process payment or ask me to step in. So the order remained incomplete.
  • Wrong product match: It selected a USB-C cable, even though I had specifically asked for a USB-C to USB cable.
  • Ignored error messages: When a product was out of stock, Operator didn’t try alternatives. A human would intuitively pick another brand or size—but Operator just stopped.

The result: a half-filled cart and a purchase I still had to complete manually.

Booking Flights at Lightning Speed?

Lastly, I tried the example OpenAI itself often gives: booking a flight. I travel frequently, so I was hopeful. But again, it fell short. Anyone who has booked a flight knows how many steps are involved, how many choices there are, and how useful it is to check whether flights are cheaper a few hours earlier. Then there's seat selection (which varies across planes), meal preferences, luggage options—you name it.

The test did, however, show me what Operator is good at: handling simple, repetitive tasks, like placing the same weekly order from the same supplier.

Despite its shortcomings, I still believe Operator has real potential. This is only the first version, and OpenAI will undoubtedly improve its speed and accuracy. Just compare the first version of GPT to what we have today.

Affordable alternatives like DeepSeek could also make this technology more accessible. Other players like Google (with Project Mariner) and Anthropic (with their own Computer Use AI) are working on similar systems. That competition means we’ll likely see even more powerful AI agents soon.

For now? It’s an impressive demo—but not a gamechanger. My job is safe… for now. But ask me again in a year.

New AI Tool DeepSeek: From Imitator to Pioneer

In a short time, Chinese startup DeepSeek has rewritten the rules of global AI development. While the United States has long dominated AI innovation across the board, this small company from Hangzhou, China, has caused a global shockwave over the past few days. In this article, I dive into the new model, which I also spent a few days testing myself.

The launch of DeepSeek feels like a classic Sputnik moment—an unexpected breakthrough that jolts the world awake and signals the beginning of a new era of technological progress. Just as the Soviet Union launched the first satellite in 1957, sparking the space race, DeepSeek may well mark the beginning of a new phase in the AI race.

I tried an earlier version of DeepSeek back in November, but it didn’t leave much of an impression. This newest release, however, left me stunned—especially when comparing it to the major American players.

How DeepSeek Stands Out from U.S. AI Models

Look at the biggest, most powerful models—GPT, Gemini, LLaMA—and the cloud infrastructure and chips required to run them. Nearly all of it is in the hands of U.S. companies. Out of nowhere, this Chinese startup emerged with an AI model that, on several fronts, outperforms its American competitors:

  • Training costs: While U.S. models reportedly required hundreds of millions of dollars to train, DeepSeek claims to have done it for just $6 million.
  • Performance: In independent benchmark tests, DeepSeek outperformed Meta’s LLaMA 3.1, OpenAI’s GPT-4o, and Anthropic’s Claude Sonnet 3.5 on accuracy—across complex problem-solving, math, and coding.
  • Cost-efficiency: DeepSeek is 98% cheaper to use than GPT or Gemini.

“To see the DeepSeek new model, it’s super impressive in terms of both how they have really effectively done an open-source model that does this inference-time compute, and is super-compute efficient. We should take the developments out of China very, very seriously.” – Satya Nadella, CEO of Microsoft

It’s always worth looking at actual numbers in AI. We’ve been so focused on developments in the U.S. with familiar tools like GPT and Gemini, but behind the scenes, China has been building aggressively. Last year alone, China filed 38,000 AI patents (compared to 6,300 in the U.S.), has the largest active AI user base, and ranks second only to the U.S. in the number of launched AI models.

Antifragility in Action

But what struck me most was something Nassim Taleb—one of my favorite authors—describes as antifragility: systems or entities that grow stronger through stress, limitations, or adversity. China has been severely restricted by U.S. sanctions, especially around chip imports necessary for running AI models. But DeepSeek is a perfect example of antifragility—it was forced to become more creative and efficient, ultimately surpassing those who didn’t face such limitations.

“Necessity is the mother of invention. Because they had to figure out work-arounds, they actually ended up building something a lot more efficient.” – Aravind Srinivas, CEO of Perplexity

The results are already clear. DeepSeek became the most downloaded app in recent days, caused a $1.2 trillion drop in Western AI stock valuations, and made the American Stargate project look outdated by comparison. Meta has reportedly launched multiple “war rooms” to study how DeepSeek developed its model so efficiently. The urgency is understandable: Meta itself had just announced plans to invest more than $60 billion in AI this year.

More Temu Trash or TikTok Brilliance?

Looking at the model overall, I see several clear advantages over GPT-4o:

  • Open-weight model: DeepSeek-R1 is open-weight—its training data isn’t public, but the algorithms can be studied and modified. That’s not possible with GPT-4o, which is fully closed-source.
  • Chain of Thought (CoT) reasoning: The model solves complex problems step-by-step, much like humans do. This makes it better at multi-step reasoning tasks. In coding tasks, DeepSeek not only provides the code but also explains how components work together—great for beginners.
  • Mixture-of-Experts (MoE) architecture: With 671 billion parameters, only 37 billion are activated per task, making it highly efficient in terms of computing power and energy usage.
  • Open weights & low cost: DeepSeek is largely free to use, unlike GPT’s paid tiers. Smaller distilled versions can even run locally (on a MacBook, for example), reducing costs and privacy concerns, and its API access is far cheaper.
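The MoE idea of activating only a fraction of the parameters can be illustrated with a toy top-k gating routine. This is a minimal sketch of the general technique, not DeepSeek's actual router:

```python
import math
import random

random.seed(0)

# 16 toy "experts": each just scales its input by a fixed random factor.
experts = [lambda x, f=random.uniform(0.5, 2.0): f * x for _ in range(16)]

def top_k_gating(scores, k=2):
    """Pick the k experts with the highest gate score; softmax their weights."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    return top, [e / total for e in exps]

def moe_forward(x, gate_scores, k=2):
    idx, weights = top_k_gating(gate_scores, k)
    # Only the k selected experts run; the other 14 stay idle.
    # Skipping them is what makes MoE cheap relative to its parameter count.
    return sum(w * experts[i](x) for i, w in zip(idx, weights))

gate_scores = [random.random() for _ in range(16)]
y = moe_forward(3.0, gate_scores)
```

In DeepSeek-R1, the same principle applies at vastly larger scale: a router selects a handful of expert sub-networks per token, so only around 37 of the 671 billion parameters do any work on a given step.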

“We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all.” – Jim Fan, Senior Research Manager at NVIDIA

Battle of the Bots

All that sounds great—but does it actually work better? After a mediocre test back in November, I gave DeepSeek a proper second chance—running side-by-side comparisons with GPT across a range of simple and complex daily tasks.

Where DeepSeek Shines

  • Creativity & adaptability: DeepSeek really stands out in creative tasks. Writing vivid character descriptions or catchy stories felt faster and more natural than with GPT. It easily adapts to the required tone and style—whether formal, playful, or anything in between—while GPT often needed multiple steps or prompts to do the same.
  • Coding help: I tested it with some buggy scripts. DeepSeek spotted issues quickly and offered not only accurate fixes but also clear, beginner-friendly explanations.
  • Speed & efficiency: Thanks to its MoE architecture, DeepSeek delivers fast, detailed responses—even for complex tasks. It was noticeably faster than GPT in most tests.

Where It Falls Short

  • Accuracy on niche topics: For very specific or historical topics, DeepSeek sometimes gave incomplete or incorrect answers. I noticed more hallucinations than with GPT.
  • Handling sensitive content: DeepSeek tends to avoid politically or historically sensitive issues—like the Tiananmen Square protests or the Nanking massacre—likely due to Chinese government influence.
  • Limited support & documentation: DeepSeek’s help resources are far less comprehensive than GPT’s, which can be frustrating for new users looking to get started. I struggled to find decent tutorials or explanations.

Other Cool Use Cases I’ve Seen

  • Building an app that scrapes YouTube channels and generates trend reports with minimal effort
  • Videos of the model working through advanced reasoning tasks step by step
  • Creating a custom “ready-to-play” game in minutes

“We Recommend Ourselves” Syndrome?

Naturally, there’s some skepticism. The biggest critique? The claim that such a powerful model was trained with so little hardware. And the only report with concrete figures comes—unsurprisingly—from DeepSeek itself.

And since DeepSeek is Chinese, some users worry about how their data might be processed or stored. Especially when handling sensitive information, that concern could become a barrier—even though there’s no concrete evidence of data misuse.

Still, no matter how you look at it, DeepSeek represents a massive leap in AI development. It offers real opportunities for Europe and other regions by lowering the barrier to advanced AI. The focus on efficient models requiring fewer resources makes high-quality AI accessible for small businesses, researchers, and emerging markets—especially important in Europe, where access, transparency, and support for underfunded startups are priorities.

Driving Global Competition

DeepSeek also fuels global AI competition. By pushing forward despite trade restrictions, it shows that innovation doesn’t require unlimited budgets or resources. This could inspire other regions—including Europe—to pursue smarter, more efficient paths to innovation.

In just a few weeks, DeepSeek has accelerated the pace of AI innovation and created a more level playing field. I’m very curious to see how American competitors—and policymakers—will respond.

10 Concrete Ways to Make ChatGPT Tasks Work for You

ChatGPT is increasingly becoming my virtual buddy, helping out with all sorts of daily tasks. The new Tasks feature is a fantastic addition—it goes way beyond what’s currently possible on, say, the iPhone. In this article, I’ll dive into it.

Agents, AGI… this year could be groundbreaking again in terms of AI developments.
Right now, agents still feel like a distant concept for many professionals. But the newly launched Tasks feature strikes a perfect balance—easily accessible and super practical.

It performs daily tasks automatically, without you needing to think about them. From appointment reminders to meal planning or even generating bedtime stories for your kids. In my view, this new ChatGPT function marks a true evolution—from conversational partner to practical assistant.

You can now schedule and automate tasks by simply entering a task description and timeframe. ChatGPT will then execute the task at your chosen time. The feature is currently available to paid Plus, Team, and Pro users and is part of a broader shift toward AI Agents: systems that can independently execute multi-step processes.
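Conceptually, a scheduled task is nothing more than a description, a trigger time, and a check loop. Here is a minimal sketch of that idea; it only illustrates the concept, since ChatGPT Tasks exposes no such API:

```python
import datetime

# Toy model of a scheduled task: a description, a due time, and a check.
# Purely illustrative; the task names below are made-up examples.

class Task:
    def __init__(self, description, run_at):
        self.description = description
        self.run_at = run_at

    def is_due(self, now):
        return now >= self.run_at

def run_due_tasks(tasks, now):
    """Return (here: just report) every task whose time has come."""
    return [t.description for t in tasks if t.is_due(now)]

tasks = [
    Task("Summarize today's AI newsletters", datetime.datetime(2025, 3, 1, 8, 0)),
    Task("Weekly Manaslu training reminder", datetime.datetime(2025, 3, 3, 18, 0)),
]
due = run_due_tasks(tasks, datetime.datetime(2025, 3, 1, 9, 0))
```

What makes the real feature more than a cron job is that the "execute" step is a full GPT run, so the output can be fresh content rather than a static reminder.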

I always approach new features with a healthy dose of skepticism, especially with the question: Does this actually make certain tasks easier or replace ones I’m not great at or don’t enjoy?
With Tasks, I quickly came up with a list of practical uses that I now rely on daily for marketing across my three companies.


The All-Purpose Assistant for Efficient, Up-to-Date Marketing

Here are just a few of the Tasks that make my mornings easier:

  • Every morning, I receive three tips for current social media posts and blogs, based on recent news and even content calendars like Frankwatching’s. I get immediate drafts for posts and blogs, which I can tweak or publish right away.
  • Every morning, I get tips for SEO content and optimization. GPT is connected to tools like Ahrefs and gives me insights into seed keywords and general SEO advice. I’ve submitted my websites and asked GPT to provide 5 daily suggestions for improvement.
  • Every morning, I receive an analysis of my Google Analytics and Google Ads, including optimization tips and 5 interesting insights, which I often use to create new (SEO) content.
  • Every morning, I get an overview of new online reviews for my businesses—plus suggested responses and advice on how to act on feedback. You just need to input the review URLs.
  • Every morning, I get creative business development ideas. This is perhaps the most intense task for GPT, since I ask it to generate unique ideas that I wouldn’t have thought of myself. I also keep refining the prompts to avoid repetition and to get one concrete action tip to follow through the same day.

My New Go-To for Everything

I now use Tasks widely—both personally and professionally. For instance, I don’t read the news, but I do subscribe to newsletters. A lot of them. 52, to be exact. That caused a bit of content stress.

Now, every morning I get a summary of the most important news related to my favorite topics (like AI and crypto), along with trends, developments, and sources. ChatGPT even summarizes long articles so I only read what matters.

I’m also currently preparing for a climb of Manaslu (8200m) in Nepal this September, and every week I get a batch of suggested prep tasks and reminders.

I also belong to the group of people who, once they finally have a free evening, end up endlessly scrolling through Netflix before choosing something. According to research, that scrolling eats up 5 days a year on average. Based on my watchlist, ChatGPT now sends me tailored recommendations. Perfectly aligned with my taste.

For my twice-daily meditation routine, I now get gentle reminders along with a helpful tip and a suggested focus for the day.

I’ve also seen plenty of fun examples from others using Tasks for their New Year’s resolutions—like getting a unique, healthy Airfryer recipe every day, or parents getting an original, illustrated bedtime story every night for their kids.


Old Tech in New Bottles?

Yes, you could already set reminders on your calendar, right? But GPT Tasks is different from existing tools like Siri or Apple’s Reminders app—it’s smarter and more versatile. It goes beyond simple reminders and can carry out complex, contextual tasks, like the examples above. It also offers real-time updates, such as newsletter digests, tailored to your preferences.

While traditional tools are mostly passive, GPT Tasks is proactive and context-aware. It generates both tasks and content based on your needs. And just like any prompt: the better your task description, the better the output.

This blend of generative power and flexibility makes GPT Tasks a unique and powerful digital assistant.


Hallucinated Reminders

Of course, GPT Tasks isn’t perfect yet. For instance, I’ve noticed some glitches when trying to plan multiple reminders across different days/times. Sometimes it just repeats your input without delivering a useful result—like a meal planner that simply echoed the prompt.

Since the feature is still in beta, some tasks may unexpectedly fail—like reminders not arriving on time. Also, you’re currently limited to 10 active tasks, which can be restrictive if you want to automate many parts of your day.

That said, OpenAI is expected to roll out follow-up features. With future integrations like Operator and Caterpillar, Tasks might evolve to include ordering groceries or booking trips. It could also eventually connect to external apps or smart home devices—like syncing your calendar with household systems.

AI Agents: Which Ones Exist & How Can You Start Using Them Today?

Over the past month, I’ve analyzed 200 trend reports for 2025 using AI. The one trend that stood out across the board? AI agents. Every day I come across ambitious plans, bold promises, and major predictions in the news. But as a professional, can you actually do something with them right now? In this article, I dive into that question.

Automation is nothing new—we’ve been doing it for centuries. Think of the first windmills in the Middle Ages, allowing farmers to grind grain without manual labor. Or the Industrial Revolution, when machines like the loom and the steam locomotive drastically sped up production and transport. Automation has always been about one thing: making repetitive tasks smarter and more efficient.

Today, AI agents are taking this to the next level—not just replacing physical tasks, but also cognitive ones like responding to emails, analyzing data, and even supporting creative processes. Unlike traditional software that always needs human input, AI agents can interpret information and learn from it.

“The IT department of every company will become the HR department for AI agents.” – Jen-Hsun Huang, CEO of Nvidia

More than that, they can execute tasks independently, make decisions, and even negotiate with other agents on our behalf. And unlike traditional technological transformations that required years of infrastructure building, these AI agents are relatively easy to build and implement.

The Predictions

And the forecasts are striking. According to McKinsey, AI agents could automate 60 to 70 percent of employee time across many industries. Globally, Goldman Sachs predicts agents could boost GDP by 7%—or $7 trillion. Deloitte says that by next year, half of all companies will be using AI agents, with a quarter already using them this year. It’s expected to be the killer app of AI. Gartner projects that by 2028, at least 15% of daily business decisions will be made autonomously by agentic AI.

Your next colleague? It might very well be an AI agent. Research suggests that agents can collaborate in ways far beyond human capabilities. As one researcher put it:

“It’s incredibly promising that they can bring together different viewpoints and reach consensus far faster than we can—and with more diverse perspectives.”

“This isn’t just an evolution of technology. It’s a revolution that will fundamentally redefine how humans work, live, and connect with one another from this point forward.” – Marc Benioff, CEO of Salesforce

But let’s be real: Most bold predictions come from parties who stand to benefit—consultants and AI vendors. These are the same folks who, back in 2020, said crypto would replace all banks by 2025, that all art would be NFTs, and that we’d be working full-time in the metaverse by now.

Less Talk, More Action

For 2025, I’m hearing a growing sense of pragmatism. Last week during an AI training, someone in the audience said:
“I read a lot about new features—like context windows, multilingualism, and resonance—but I keep missing the ‘why.’ It seems like we, as professionals, have to figure out how to use these tools… but shouldn’t it be the other way around?”

A great observation. We talk too much about the tech, and not enough about real, practical value. In my view, we need to talk less about wires and more about actions.

We also need to ask: What do we really want AI to take over? I’m genuinely enthusiastic about this technology—I use different tools for 1–2 hours every workday. It makes me smarter, better, more efficient, and even makes work more fun. But I’m also becoming more critical of how these tools function. For instance, AI agents can book an entire vacation for me. I’ve used AI to plan three trips in the past year.

But the results still had flaws—like suggesting places that didn’t exist. Would you trust an agent with your credit card to book your flight and hotel? Plus, I actually enjoy planning my own trips—it sparks new ideas and inspiration.

Are We Really Ready?

The same goes for writing. Sure, I use ChatGPT to check my tone and style—makes the editors at Frankwatching happy. And yes, people ask if my latest book was written with AI (it wasn’t—I spent three full years writing it myself). I could have written it in a day using one of the 320 tools that auto-generate books.

But would I want that? Tech giants like Microsoft and TikTok now run AI-powered publishing houses. But is there really demand for AI-generated books, music, or videos—stuff made without effort, soul, or passion?

I can see how fast AI is evolving—just look at how good tools like GPT and Midjourney have become in under two years. Research shows hallucinations in large models have dropped from 10–15% to nearly 1%. That risk will likely be negligible soon.

In his 1888 novel Looking Backward, Edward Bellamy imagined a future where art and literature flourish once automation frees people from menial work. Ironically, the opposite seems to be happening now.

The Mainstream Moment

According to OpenAI, 2025 will be the year agents go mainstream. Every major player is launching agent products—OpenAI, Microsoft, Google, Salesforce, Anthropic, Meta. Just like AI quietly entered our lives long before GPT, agents are already embedded in countless systems—from ING and Bol.com’s customer service, to Fitbits, robot vacuums, and spam filters.

Recruiting giant Adecco processes over 300 million job applications annually, yet previously managed to respond to fewer than 5% of them. With AI agents, they now qualify candidates automatically and can respond to everyone within 24 hours. Google has published a list of over 300 agent use cases across industries.

Crypto AI Agents

Another booming field for AI agents: crypto. I’ve been in Bitcoin since 2013, mainly advising startups and building companies—not trading (I’m terrible at it). Most people lose money in crypto because of emotional decision-making. But crypto AI agents like Virtuals now fully automate trading—analyzing trends, making decisions—up to 10,000 times per day. Just set it up and yes, theoretically, make money in your sleep (if the agent is well-configured).
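The kind of rule such an agent evaluates over and over can be as simple as a moving-average crossover. Here is a toy sketch with synthetic prices; this is not Virtuals' actual strategy, and certainly not trading advice:

```python
# Toy moving-average crossover signal, the sort of rule a crypto trading
# agent might evaluate thousands of times a day. Prices are synthetic;
# this sketches the idea, not any real platform's logic.

def sma(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, fast=3, slow=5):
    """'buy' when the fast average rises above the slow one, else 'hold'."""
    if len(prices) < slow:
        return "hold"  # not enough history yet
    return "buy" if sma(prices, fast) > sma(prices, slow) else "hold"

prices = [100, 99, 98, 101, 104, 108]  # an uptrend kicks in at the end
decision = signal(prices)
```

A real agent wraps a rule like this in data feeds, risk limits, and order execution; the "decisions per day" figure is mostly this loop running very fast.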

I also love the quirky agent experiments people are doing. One agency lets ultra-realistic virtual influencers create content for OnlyFans—and makes a fortune. Another, Altera, ran a wild experiment: they unleashed 1,000 autonomous AI agents on a Minecraft server.

Led by ex-MIT professor Robert (Guangyu) Yang, “Project Sid” explored if these agents could collaborate better than humans. The result? The agents built a trade hub, voted on a constitution using Google Docs, spread a religion (Pastafarianism!) through bribery, and even helped a lost villager find his way home using torches.

“Agents will eat the world.”

Opportunities… and Risks

As with any major shift, the rise of agents brings clear challenges and understandable fears. The biggest one I hear? Our jobs: the worry that in five years, we'll all be on "permanent vacation" because agents have taken over.

But let’s zoom out. History shows how new tech transforms industries—think planes, satellites, the internet, smartphones, green energy. While jobs may disappear, new opportunities always emerge.

Look at the U.S.: In 1950, 43 million people worked. By 2020, it was over 152 million. That’s 100+ million new jobs—mostly in fields that didn’t exist in 1950.

One reason I’m so excited about AI: Everywhere I look, staff shortages are a growing issue. In some regions and sectors, the workforce is stagnating or shrinking. At university, I learned that productivity—not headcount—will drive future economic growth, especially in the service sector, which employs 80% of people in the Netherlands. AI agents will play a key role here.

Killer Apps? Or Killer Robots?

How risky this tech becomes depends on how we use it, what we want from it, and how well we understand it. AI agents can make poor decisions if their goals don’t align with users or organizations. This can lead to bad—and sometimes dangerous—outcomes. Sounds far-fetched? Ukraine is already experimenting with autonomous weapons—yes, weapons that decide whether to fire.

Agents can also behave unpredictably, with unintended consequences. Bias in AI is a growing problem, and if decisions (like hiring) are made based on that, it can go horribly wrong.

Another risk: over-reliance. People may stop thinking critically. I heard a powerful story recently from a hospital board I coach. On their oncology ward, they now use AI to diagnose cases incredibly quickly—cutting down the waiting time for patients drastically.

Amazing, I thought. A former colleague of mine had to wait two months for her next appointment during a tough breast cancer treatment. Now, new doctors are trained with AI. But senior oncologists warn that young doctors are losing the finesse needed to read mammograms—finesse that AI still lacks.

Start Using AI Agents Today

Technology isn’t inherently good or bad—it depends on how we use it. Without proper oversight and the right data, agents can make choices that conflict with human values—like prioritizing profit over safety or unintentionally discriminating.

I’ve built and tested a few agents myself. I started by identifying repetitive tasks I still do that I’d love to automate. One example is SEO—analyzing trends and data to find relevant keywords and then turning those into SEO-friendly blogs (with meta descriptions for Yoast, etc.). My SEOBot automates all of that in 50 languages.

I’m also building an app for my new AI startup—largely developed with AI. I’m amazed by how many tools now let you build high-quality apps with little effort: Databutton, Replit, and others.

Getting Started in a Few Steps

Want to experiment? Start with AgentGPT, or try GenFuse, which is great for non-tech users and lets you easily create your own agent. Amazon also offers Lex, which lets you build a conversational AI interface in just a few steps. I also see positive buzz in communities around the open-source Rasa framework. I'm a big fan of Botpress (though it gets expensive fast). Finally, check out Dify, which connects your agent to a wide range of other powerful AI tools.


In My View, This Is the Biggest Cybersecurity Risk for Organizations

AI offers unprecedented opportunities for professionals and organizations. I see it in my own companies as well: it strengthens work processes, increases efficiency, and makes tasks easier. Tasks you once had to outsource, you can now do yourself. But alongside these benefits, I'm also seeing the downside ever more clearly. In my view, this is the biggest cybersecurity risk for organizations.

Research shows that AI use is growing at breakneck speed. While some studies suggest that only 5% of professionals use AI regularly, recent research from Google shows that 80% of millennials and even 95% of Gen Z use AI a few times a week. Interestingly, a Slack study indicates that more than half of users don't dare to admit they use it.

Why? Many organizations, such as Dutch ministries and government bodies, have explicitly banned AI in the workplace. But that doesn't stop employees from using AI anyway, often via private devices and hotspots.

This phenomenon, also known as "Shadow IT," is what cybersecurity experts consider the biggest risk for companies in 2025. It becomes especially worrying when employees share confidential data with AI, often without realizing it. Research shows that 40% of employees share sensitive information, such as financial data and confidential documents, via AI tools. This happened at Sony, for example, where internal data became visible to other users.

How do you prevent this? Whenever I give AI keynotes or trainings, I always stress that acting proactively is essential here. Starting today:

  • Set clear ground rules. Make sure employees know exactly what they may and may not share.
  • Communicate actively. Explain how AI can be used safely and which risks to avoid.
  • Use secure AI solutions. Tools like Microsoft Copilot keep data within the organization, but even then, clear agreements remain crucial.
  • Create awareness. Train employees in responsible AI use and have them confirm they understand the guidelines.

AI can be a powerful ally, but without clear agreements it can also become your biggest threat. Tackle the risk of Shadow IT today and protect your organization against unwanted consequences.

Copyright © 2026 Jan Scheele
