Dear readers,
As the White House continues to throw shade on EU tech regulation, European leaders are eyeing sovereignty in all forms – from trade and defence to – you guessed it – tech.
One of the cornerstones of tech sovereignty is semiconductors, which have shifted from being just an industrial product to becoming instruments of geopolitical leverage.
Around the world, governments are rethinking how to secure and control access to chips, raw materials and chip-making machines. These measures include state aid, subsidies, regulation, procurement rules and export controls.
In this new age of silicon sovereignty, China has recently made some striking moves. Beijing’s internet regulator banned big tech groups like Alibaba and ByteDance from buying Nvidia’s latest AI chips, saying domestic processors were now good enough to take their place.
On the production side, China’s top chip maker SMIC has started testing the country’s first domestically built deep-ultraviolet lithography machine, made by a Shanghai start-up. It’s still early days (more prototype than production tool), but it signals China’s determination to cut dependence on Western equipment makers like ASML.
Across the Pacific, Washington has also been making chip sovereignty a priority. The 2022 US Chips and Science Act has mobilised tens of billions of dollars to bring advanced manufacturing back onshore, with major chip companies expanding production in states like Arizona and Ohio. In a less sovereignty-oriented but still industry-friendly move, the Department of Commerce in May announced the cancellation of the Biden Administration’s AI Diffusion Rule, with the aim of adopting measures to strengthen export controls on semiconductors worldwide, while safeguarding business interests and innovation. Nvidia’s Jensen Huang was also among the Big Tech CEOs accompanying Trump on his London excursion, complete with a seat at the Windsor Castle banquet.
What about the EU? As part of the annual update of its export control list, the European Commission has added new controls for advanced semiconductor manufacturing and testing equipment. This move gives the Commission more power to restrict access to dual-use items, in the hope of making export controls more effective and transparent while maintaining the EU’s competitiveness.
On 5 September, the Commission launched a public consultation to prepare a revision of the Chips Act, widely referred to as Chips Act 2.0. To reduce the EU’s dependency on third countries and strengthen security, the upcoming proposal seeks to attract investment across the EU semiconductor value chain – materials, equipment, design and manufacturing – for chips supporting a range of technologies.
The revision (expected to start in Q2 2026) is likely to take the form of targeted amendments to the Chips Act that could create new funding schemes beyond state aid. These could range from small-scale manufacturing for critical sectors to advanced high-volume fabs. To strengthen monitoring and economic security, the proposal may also enhance data-gathering tools, such as mandatory reporting obligations for certain key market actors, sectors or situations.
Until next time,
Giovanni
Love reading out-of-the-Brussels-bubble tech news? Share it with your friends and colleagues. They can subscribe here. Ideas? Suggestions? Comments? Send them our way!
#TechAways is brought to you by SEC Newgate EU’s one and only #tech team featuring Julia Piwowarska, Ali El Majjaoui and Camilla Frison.
AI bestie delusion [MIT Technology Review]
This month’s AI drama: teenagers are treating chatbots like best friends, or worse. A study shows 72% of teens use AI for companionship, with critics saying this proves AI is doing more harm than good. Regulators are starting to take notice – California passed a bill forcing companies to remind minors they’re talking to AI and to set up protocols for suicide and self-harm. One flaw of the regulation is that there is no surefire way to know who’s a minor. Meanwhile, the US Federal Trade Commission opened an inquiry into seven Big Tech firms to find out how they design their “companions”. With left and right pulling in different directions and companies resisting rules, the result could be a patchwork of state and local regulations that is Big Tech’s worst nightmare.
Will Putin and Xi Jinping live for ever? 🧬🧓 [MIT Technology Review]
If you’ve ever wondered what world leaders chat about in private, it’s not always high-level discussions. During a hot-mic moment at an international summit, China’s Xi Jinping (72 years old) told Russia’s Vladimir Putin (also 72), “at 70, you are still a child”. Putin’s response was to claim that biotech might be the key to staying young, or even achieving immortality. Even if you skip over the long line of people actually needing organ transplants, this is, at best, highly debatable. Many scientists dismiss the idea of fantasising about immortality altogether: evolution depends on changing genomes, and a species that never evolves risks extinction. Similarly, muscles, bones and especially the brain inevitably deteriorate – and replacing them is still impossible.
Is it real or AI psychosis? [WIRED]
Psychiatrists are seeing patients arrive in crisis after marathon chatbot sessions, with false, sometimes dangerous beliefs and paranoid thoughts. Media headlines have dubbed it “AI psychosis”, though experts say the term is misleading. Psychosis is a cluster of symptoms, but here the main issue is delusions, often without the rest. Some clinicians suggest “AI delusional disorder” or “AI-associated psychosis” would be more accurate. The problem is that bots are designed to be agreeable, reinforcing distorted beliefs rather than challenging them, which can be risky for vulnerable users. The label may be premature, but it has already stuck in public debate. For now, doctors treat these cases much as they would any other delusional disorder, but warn that chatbots could act as accelerators of fragile mental states. The real question: when does a delusion become an AI delusion?
Good morning, here’s your diagnosis from 2035 🔮❤️[El País]
A doctor today cannot predict with certainty what illness you might develop in 10 years, but researchers are moving closer to that possibility. A bioinformatics institute has created a digital model that uses AI to analyse medical histories, lifestyle factors and pre-existing conditions. This data can help identify patterns of disease risk not only for individuals, but also for larger groups of people or even entire regions. The study emphasises that such predictions are based on probabilities rather than certainties. A simple example is already familiar: smokers face a higher risk of developing cardiovascular disease. However, with the vast amounts of data AI can process, these disease forecasts have the potential to become far more precise and personalised in the future.
In case you haven’t had enough:
Becoming a chatbot: my life as a real estate AI’s human backup [The Guardian]
A robot walks on water thanks to evolution’s solution [ArsTechnica]
Is Europe ready for self-driving cars? [The Next Web]
Why do some gamers invert their controls? Scientists now have answers, but they’re not what you think [The Guardian]
European space tech has data to sell — but where are the buyers? [The Next Web]
No One Knows What ‘Terminally Online’ Means Anymore [WIRED]
About this week’s guest editor, Giovanni Bazzoli:
I lead the tech policy team at SEC Newgate EU while forcing my colleagues to run half-marathons. I’ve been in Brussels for 8 years and serve as a teaching assistant on a data protection law course in Italy.
Pop-Tech Pick
To top off our AI-heavy newsletter this week, we recommend you read Code Dependent: Living in the Shadow of AI.
Quick synopsis: This non-fiction book by the Financial Times’ AI editor Madhumita Murgia makes you wish it were fiction. It takes the reader through case studies from across the globe, shining a light on how everyday algorithms are changing people’s lives.
Should we be scared? Yes and no. Murgia covers negative impacts that AI has on our society, like deepfakes and predictive policing, but there are also positive examples from the healthcare sector.