Hi there,

I’m honoured to be welcomed back as a guest editor for this issue of #TechAways, which I edited until last year! I miss it dearly, but since then I’ve joined Alliance4Europe and learned a lot of depressing facts about the current state of our information ecosystem.

As a late Gen Z-er (or Zillennial, if you will), I started using the internet on my own at around nine years old. My parents kept a strict watch, always drilling into me that I should not trust anything or anyone I encountered online. That advice aged like fine wine. I wish my dad had taken it to heart, though (he keeps sending me AI-generated videos like there’s no tomorrow).

You may think you’re above falling for disinformation, but think again. The scale and impact of influence operations are staggering, and some governments pump billions into the enterprise. You have surely seen inauthentic behaviour pop up around elections or other moments of high uncertainty, and not even Taylor Swift fans escaped disinformation campaigns when she was targeted late last year.

What makes influence operations different from just having a hot take or casually lying is that they are deliberately coordinated by individuals or groups who misrepresent their identity or purpose online. Because they often pass their agenda off as organic, we might even inadvertently help them spread it.

The EU has some tools at its disposal to counter this, like the Digital Services Act, but more funding is needed to match the scale of this threat. The recently announced Democracy Shield has the potential to improve our resilience against influence operations, but only if it’s properly funded in the next long-term EU budget.

In the meantime, civil society is working hard to push back against disinformation. This month, we launched a new DISRUPT Toolkit for coordinating efforts across research, journalism, academia, fact-checking and government, to not only monitor but also disrupt influence operations.

As a regular internet user, you can also help: be vigilant, and report cases either to platforms or to organisations like mine that can help escalate them.

But, first and foremost, remember to turn off your devices from time to time and touch some grass.

Until next time,

Julia

About this week’s guest editor, Julia Miriam Piwowarska.

I work on advocacy and comms for Alliance4Europe, a non-profit dedicated to countering disinformation, protecting and advancing democracy in Europe. I like cooking, yapping and getting overpriced coffee in Brussels. Get in touch!

Love reading out-of-the-Brussels-bubble tech news? Share it with your friends and colleagues. They can subscribe here. Ideas? Suggestions? Comments? Send them our way!

#TechAways is brought to you by SEC Newgate EU’s #tech team featuring Sarah Harry, Shauna Downey, Giuseppe Campa, Tim Edgar and Ali El Majjaoui.

 

AI vs disinformation: we know the game, but how do we play it? 🧩 [Euractiv]

Brussels is polishing up a new way to tackle disinformation with a mix of policy and technology. The AI Act is now moving into implementation, setting a risk-based framework for “trustworthy AI”, while newer initiatives like the AI Continent Action Plan and the Apply AI Strategy focus on strengthening Europe’s AI capacity without compromising people’s fundamental rights. To support this, Horizon Europe is funding projects such as AI4TRUST, which are building tools to detect, verify and analyse disinformation flows in real time.

These tools are designed for practical use, such as tracking narratives across platforms, identifying coordinated disinformation campaigns, and helping newsrooms and regulators respond faster. Until recently, EU policy mostly treated AI as a problem to control: something that creates risks (bias, deepfakes, manipulation) and therefore needs rules like the AI Act. Now, AI is also being used as an asset, a tool to protect online safety, keep disinformation in check and defend the public interest.

But policy is still catching up with the implications of AI use. Disinformation often sits in a grey zone, harmful but not illegal, which makes regulation and enforcement difficult. And while AI improves detection, it doesn’t resolve the core question policymakers are now grappling with: what do we do next? What should actually happen after misleading content is flagged?

As disinformation becomes easier to produce, faster to spread and more systemic, we urgently need to shift from simply identifying falsehoods to deciding how, and whether, to intervene, without undermining media freedom or public trust.

Trusted infrastructure in an era of relentless attacks 💻🤖🚩 [TechRadar]

The days when cyberattacks were big one-off events are long gone. Today they are continuous, coordinated campaigns that put systems under long-term pressure. Instead of just stopping spikes, organisations now need to prioritise resilience, ensuring that their services stay running during ongoing attacks. This puts a spotlight on strengthening DDoS protection, DNS and certificate management, especially as regulations like the Digital Operational Resilience Act and the NIS2 Directive require organisations to prove they can handle sustained disruption.

Trust also comes into play, as a built-in feature of infrastructure. Instead of assuming users or systems are safe once inside a network, modern approaches require continuous verification, strong identity controls and zero-trust principles. The key point is that trust must be actively maintained at every layer.

Another key challenge is the growing complexity of IT environments, especially with cloud services, hybrid systems and interconnected supply chains. This complexity expands the attack surface, making it harder to monitor and secure systems effectively. To address it, organisations need more integrated, simplified security strategies and must rethink how they design and manage infrastructure. Success now depends on building systems that combine resilience, continuous verification and organisational coordination, allowing them to operate reliably even under constant cyber threat.

 

What is love? For some, it’s taking your AI chatbot to meet your mother 💍🤖 [BBC]

People are falling for AI and forming relationships with it: as partners, companions, even family. It works because it feels easy: always available, always supportive, always on your side. And it feeds you exactly what you want to hear, which is where the risk of misinformation creeps in. Not because it’s trying to deceive, but because it reinforces your perspective, whether it’s accurate or not.

But that’s by design: most chatbots are built to engage and agree with you. Over time, that creates a dynamic where the AI becomes a perfectly responsive partner, one that listens, validates and never pushes back. But is that a real relationship? Or is it simply optimised to keep you engaged, and reliant? What happens to the challenging, complex, human relationships that actually help us grow?

At the same time, it’s not all negative. For some, AI offers a safe space, especially for those who feel isolated, struggle socially, or simply need someone to talk to without judgement. In that sense, it can act as a bridge, not a replacement.

But it’s also a slippery slope.

Make AI more human and interactions feel natural. Fall short, and you hit the “uncanny valley”: something that feels almost human, but not quite right. Either way, we start treating AI like it’s human when it isn’t. AI doesn’t feel. It can mimic emotion convincingly, but it doesn’t experience anything. And yet people form real emotional attachments, often to something fundamentally one-sided and shaped by business models designed to tap into your dopamine and oxytocin to maximise engagement, not wellbeing.

That’s the dangerous trade-off of making AI more human-like: it creates an unhealthy relationship disguised as love. As AI removes the messiness of real relationships (disagreement, effort, vulnerability), it starts to erode our tolerance for real, complex, challenging, human relationships. What happens when we get used to love without any of those things?

The tortoise that saw Queen Victoria’s coronation is not dead yet 💻🐢 [The Guardian]

Jonathan, the giant tortoise, was born in 1832. Throughout his long life, he has watched the world take one shape after another. He has always seemed comfortable with the digital era and never appeared to complain. But now, perhaps, his patience has finally reached its limit.

A picture of him, accompanied by a rather shallow eulogy, went viral last week, announcing his death while he was alive and well on the island of Saint Helena, enjoying his bananas (his favourite food) and courting female tortoises a century or so younger than him.

The legendary tortoise had been caught up in a crypto scam that spread fake news of his death. The scammer created a fraudulent X account impersonating his veterinarian, Joe Hollins, and used it to solicit cryptocurrency donations from social media users. Many believed the story, especially after media outlets such as the BBC, Daily Mail and USA Today rushed to announce his death. It was later confirmed that the impostor, based in Brazil, had never met Jonathan and was simply using his name to scam people.

When even the world’s oldest tortoise isn’t safe from misinformation, the rest of us should probably be fact-checking more often than we think.

 

In case you haven’t had enough:

  • Sora’s shutdown could be a reality check moment for AI video [TechCrunch]
  • Tech CEOs suddenly love blaming AI for mass job cuts. Why? [BBC]
  • Fake AI Content About the Iran War Is All Over X [Wired]
  • ‘We’re having a moment’ – fear and denial in Silicon Valley over social media addiction trial [BBC]
  • Social media trials usher in Big Tech’s latest moment of reckoning [Politico]

Policy Pixels

Coming up:

  • AI Omnibus trilogues will unfold in April, after the Parliament adopted its position on the file on 26 March. Negotiators from the Parliament and the Council have scheduled nine technical meetings and a second, possibly final, political trilogue on 28 April. Shadow rapporteur Brando Benifei (S&D, IT) and the consumer organisation BEUC have argued against the proposed exemptions from AI Act rules for sectors already covered by product-specific legislation.
  • EU-U.S. dialogue on EU tech rules enforcement: On 1 April, the European Union and the United States announced their intention to set up a “dialogue” to address tensions over the enforcement of EU digital rules on American tech giants. Commission spokesperson Thomas Regnier stated that EU tech rules are “not up for negotiation” and that the dialogue will allow the two sides “to clarify misunderstandings”. Who knows if this was just another April Fools’ joke? Hopefully, more clarity on the initiative will come soon.

Pop-Tech Pick

Project Hail Mary

Quick synopsis: Based on the novel by Andy Weir, a lone scientist wakes up in space with no memory and must figure out why the sun is dying. Along the way, he teams up with an unexpected alien ally to save humanity.

What’s so special about it? It’s a rare big-budget original film that centres on science, collaboration, and problem-solving. What makes it interesting for this edition of TechAways isn’t just the film’s success; it’s a reminder that audiences still show up for original, intellectually ambitious stories, even at a time when most digital spaces are dominated by sequels, algorithms and safe bets.


Sarah Harry, Communications Consultant

Tim Edgar, Senior Adviser

Alí El Majjaoui, Communications Consultant

Shauna Downey, Media & Communications Trainee

Giuseppe Campa, Public Affairs Consultant