We’re kicking off today with some tech insights from last week’s World Economic Forum Annual Meeting. My three main takeaways after a week of discussions at this year’s WEF?
1: Platforms continue their slide.
2: B2B companies stand apart and were omnipresent.
3: The future of work debate gets real.
For more of my thoughts on these three trends, check out my post One of these is not like the other….
Register for #TechAways here.
Apple takes on a surprising role as – competition enforcer? [Wall Street Journal]
After a period of crisis so long we’ve lost track, we’ve entered a new act in the Facebook saga, and a surprising antagonist has emerged to play the role of regulator – Apple. After news broke that Facebook was using an internal employee platform externally, secretly paying people to run an app that let the company monitor users’ phone and web activity, Apple decided to flex its muscles as a privacy protector. It suspended Facebook’s internal apps – and then Google’s, following similar allegations – from operating on iOS devices. While the ban seems to have been lifted – for now – a line in the sand is being drawn among the major tech players over personal-data policies.
How not to teach a driverless car ethics [New Yorker]
The biggest challenge to machines acting ethically? It’s humans. Although driverless-car technology exists and is slowly being deployed, researchers are still grappling with how autonomous vehicles should make decisions when human lives are at stake. To crack this puzzle, M.I.T.’s Media Lab launched a survey in 2016 to gather data on how humans would react in different situations, collecting input from over two million volunteers across 200 countries – the largest study of moral preferences for machine intelligence ever conducted. The results aren’t exactly helpful: different geographical groups tend to have different preferences about which lives to spare. If we want to create ethical guidelines for AI, we first need to agree on our own common ethics.
Governments are in deep… fake. [Carnegie Endowment for International Peace]
From ‘good-old’ humiliation and blackmail to election meddling and political destabilisation: so-called deepfake technology poses the usual risk/opportunity conundrum. Deepfake algorithms learn from audio-visual samples of a target and generate a synthetic video that overlays the target’s likeness and movements onto those of another person. In our sample-rich information age, these fakes are very hard even for experts to detect – let alone average citizens. But it’s not all evil: deepfakes are also helping ALS patients communicate. To avoid perverse effects, such as blanket public doubt about every video, governments should develop techniques to spot fakes and ensure that (social) media and civil society share information transparently. The EU included guidelines in its strategy to fight disinformation – but wider agreement is needed.
What jobs will robots do? [Wired]
Machines and software are getting ever better at doing our jobs, but we are not all impacted the same way. Research by the Brookings Institution digs a bit deeper, showing how unevenly automation’s disruptive effects are distributed. Young workers are more vulnerable to automation (49% of jobs held by 16-to-24-year-olds are susceptible) than older ones (40%), as are Hispanic workers compared with white workers. Software development is only 8% automatable, while food preparation scores 91%. Hopefully, this kind of mapping can inform concrete measures to anticipate and minimize the negative consequences of automation on the job market.
In case you haven’t had enough…
The state of AI in 2019 [The Verge]
Generation Z’s 7 lessons for surviving in our tech-obsessed world [Wall Street Journal]
Let users sell their data to Facebook [Bloomberg]
Has the fight over privacy changed at all in 2019? [TechCrunch]