Top Stories Daily

The latest thought-provoking Fediverse stories

There is no better way to demonstrate how Murmel works than to give you a taste of it right away. This page aggregates the most widely shared news and articles from a broad range of people across the Fediverse. You can get these in your favorite RSS reader, too. Want the news and stories that matter to you personally? Sign up and enjoy a fully tailored experience, free for 30 days.
Worth reading
Shared by @ctietze and 222 others.
Corey Snipes 🌱 (@coreysnipes) · Feb 22
🔁 @Natasha_Jay:

How far back in time can you understand English?

It’s a thousand years of the English language, compressed into a single blog post.

"... as his post goes on, his language gets older. A hundred years older with each jump. The spelling changes. The grammar changes. Words you know are replaced by unfamiliar words, and his attitude gets older too, as the blogger’s voice is replaced by that of a Georgian diarist, an Elizabethan pamphleteer, a medieval chronicler."

deadlanguagesociety.com/p/how-

#english #language

Else, Someone (@nobody) · Feb 22
🔁 @yogthos:

It’s a thousand years of the English language, compressed into a single blog post. Read it and notice where you start to struggle. Notice where you give up entirely.

deadlanguagesociety.com/p/how-

#language #english

Worth reading

Tired of dystopian sci-fi? You might like Solarpunk.

motherjones.com · Feb 19

A recent literary genre imagines what happens when our climate changes—and so do we.

Shared by @hypebot and 32 others.
Eye (@grb090423) · Feb 23
🔁 @dilmandila:

Sometimes I think my #solarpunk ideas are very original. Decentralized countries. 3D printers manufacturing anything. Biophotovoltaic panels that grow in backyards to power homes. And then I read an article like this and I remember that a lot of other writers like me are reacting to our present dilemmas and troubles, and we probably grew up on the same kind of scifi stories so we most likely end up with the same solutions. See #books to read inside.

motherjones.com/environment/20
#bookstodon

The world’s 280 million electric bikes and mopeds are cutting demand for oil far more than electric cars

theconversation.com · Feb 22

Electric vehicles get all the press – but it’s the smaller unsung two wheelers cutting oil demand the most.

Shared by @meganL and 62 others.
Joshua Byrd (@phocks) · Feb 23
🔁 @cdarwin:

You might think switching to an electric vehicle is the best step.
In fact, for short trips, an electric bike or moped might be better for you, and for the planet.
That’s because these forms of transport – collectively known as electric micromobility – are cheaper to buy and run.
But it’s more than that – they are actually 💥displacing four times as much demand for oil as all the world’s electric cars at present,
due to their staggering uptake in China and other nations where mopeds are a common form of transport.

theconversation.com/the-worlds

stib (@stib) · Feb 22
🔁 @spillanemike:

Things you like to see

The world’s 280 million electric bikes and mopeds are cutting demand for oil far more than electric cars

theconversation.com/the-worlds

Plant species near extinction mysteriously rebounded and is now thriving after a solar power project was installed

earth.com · Feb 22

A plant called threecorner milkvetch, nearly extinct, has grown eightfold thanks to solar panels, surprising scientists and conservationists.

Shared by @3TomatoesShort and 30 others.
Debbie Goldsmith 🏳️‍⚧️♾️🇺🇦 (@dgoldsmith) · Feb 22
🔁 @ar387:

Plant species near extinction mysteriously rebounded and is now thriving after a solar power project was installed

For all those that say solar PV destroys the environment, might need to rethink, but they won't as are blinkered (and like the traditional polluting energy sources that are running out).

#solar #solarpv #botany #habitat

earth.com/news/plant-species-t

Levka (@LevZadov) · Feb 22

#extinction #solar

"Plant species near extinction mysteriously rebounded and is now thriving after a solar power project was installed

A rare desert plant in Nevada, called threecorner milkvetch, increased from just 12 known plants to 93 after a large solar power project was built nearby.

Rather than clearing everything away, the project was designed in a way that allowed the plant not only to survive, but to grow in greater numbers than before."

earth.com/news/plant-species-t

Fix Your Hearts or Die

the-reframe.com · Feb 22

It's an invitation, not a threat. The path to liberation for lonely men is feminism.

Shared by @otfrom and 32 others.
icy (@otterly_icy) · Feb 22
🔁 @markmetz:

AR Moxon, nailing it again…
“This is the discourse about what is commonly called the male loneliness epidemic, which is a problem, usually one that is presented as something for the rest of us to solve on behalf of lonely men. If we don't solve it, we're usually warned, we will be at fault for whatever these men do next, in retaliation for not having their problem solved.
There's apparently nothing the lonely men themselves can do, I've noticed. They've apparently tried everything already. It's up to us.”
the-reframe.com/fix-your-heart

kittyface83 (@kittyface83) · Feb 23
🔁 @JuliusGoat:

Today I wrote about the Male Loneliness Epidemic, and the ways that a cult(ure) of abuse leads so many to seek paths of healing solely on behalf of abusers, and doing so not by building paths of universal liberation, but by repairing paths of domination.
the-reframe.com/fix-your-heart

Worth reading

They Were Convicted of Killing Their Abusers. A New Law Offered a Second Chance at Freedom.

propublica.org · Feb 22

An Oklahoma law was supposed to help reduce the sentences of women who killed their abusers. Why are nearly all of them still in prison?

Shared by @peterjsefton and 18 others.
The Flight Attendant (@CosmicTraveler) · Feb 22
🔁 @ProPublica:

NEW: The Victims Who Fought Back

An #Oklahoma law was supposed to help reduce the sentences of women who killed their abusers.

Why are nearly all of them still in prison?

propublica.org/article/oklahom

#news #crime #criminaljustice #domesticviolence #law #justice #women #courts

Steve Thompson PhD (@SteveThompson) · Feb 22

The Victims Who Fought Back

propublica.org/article/oklahom

An Oklahoma law was supposed to help reduce the sentences of women who killed their abusers. Why are nearly all of them still in prison?

Shared by @heybran and 16 others.
Brandon Zhang 🇨🇳 (@heybran) · Feb 23
🔁 @geffrey:

What I see is the result of my own choices rather than a system trying to capture and monetise my attention.

Such a great articulation. Feel the exact same way, @susam

https://susam.net/attention-media-vs-social-networks.html

Worth reading

How close are we to a vision for 2010?

shkspr.mobi · Feb 22

Twenty-five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Comp...

Shared by @tristanf and 7 others.
Terence Eden (@Edent) · Feb 23
🔁 @blog:

How close are we to a vision for 2010?

shkspr.mobi/blog/2026/02/how-c

Twenty-five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Computing" would become a reality.

The ISTAG published an optimistic paper called "Scenarios for ambient intelligence in 2010". It's a brilliant look at what the future might have been. Let's go through some of the scenarios and see how close 2026 is to 2000's vision of 2010.

Scenario 1: ‘Maria’ – Road Warrior (close-term future)

Our titular heroine steps off a long haul flight into a foreign country.

she knows that she can travel much lighter than less than a decade ago, when she had to carry a collection of different so-called personal computing devices (laptop PC, mobile phone, electronic organisers and sometimes beamers and printers). Her computing system for this trip is reduced to one highly personalised communications device, her ‘P–Com’ that she wears on her wrist.

Well… OK! Not a bad start. You probably wouldn't want everything controlled by your smart watch - but the mobile is a good substitute. Although wireless video casting works, you'd probably want a trusty USB-C cable just to make sure.

she is able to stroll through immigration without stopping because her P-Comm is dealing with the ID checks as she walks.

We're getting closer to digital ID. But outside of a few experiments, there's no international consensus. However, every modern passport has an NFC chip which can be read by most airports. You still need to hold your passport on the reader, but it's usually quicker than queuing for a human.

Maria heads to her rented car:

The car opens as she approaches. It starts at the press of a button: she doesn’t need a key. She still has to drive the car but she is supported in her journey downtown to the conference centre-hotel by the traffic guidance system that had been launched by the city government as part of the ‘AmI-Nation’ initiative two years earlier.

Lots of cars now have wireless entry and are button controlled. Rental cars often have mobile app unlocking.

The traffic guidance is not provided by local governments. A mixture of international satellites provide positioning information, and a bunch of private companies provide traffic guidance.

Downtown traffic has been a legendary nightmare in this city for many years, and draconian steps were taken to limit access to the city centre. But Maria has priority access rights into the central cordon because she has a reservation in the car park of the hotel. Central access however comes at a premium price, in Maria’s case it is embedded in a deal negotiated between her personal agent and the transaction agents of the car-rental and hotel chains

Ah! The dream of personal agents. Not even close.

In the car Maria’s teenage daughter comes through on the audio system. Amanda has detected from ‘En Casa’ system at home that her mother is in a place that supports direct voice contact.

Hurrah for Bluetooth! Every car supports that now. Presence and location sensing is also common. Although the idea of a teenager willingly making a voice call is, sadly, a fantasy.

Her room adopts her ‘personality’ as she enters. The room temperature, default lighting and a range of video and music choices are displayed on the video wall.

Pffft! Nope. But do people really want this? The music and video are stored on her phone, so there's no need to transmit private data to a hotel.

Using voice commands she adjusts the light levels and commands a bath. Then she calls up her daughter on the video wall, while talking she uses a traditional remote control system to browse through a set of webcast local news bulletins from back home that her daughter tells her about. They watch them together.

Do you want an always-on Alexa in your hotel room? We have the technology, but we seem to shun it outside of specific scenarios.

We still have traditional remotes for browsing, and how lovely that they predicted the rise of simultaneous viewing!

Later on she ‘localises’ her presentation with the help of an agent that is specialised in advising on local preferences (colour schemes, the use of language).

I'd say we're there with a mixture of templates and LLMs. Translation and localisation is good enough.

She stores the presentation on the secure server at headquarters back in Europe. In the hotel’s seminar room where the sales pitch is take place, she will be able to call down an encrypted version of the presentation and give it a post presentation decrypt life of 1.5 minutes

Yup! Most things live in the cloud. Access controls are a thing. Whether people can be bothered to use them is another matter!

As she enters the meeting she raises communications access thresholds to block out anything but red-level ‘emergency’ messages

Do-Not-Disturb is a feature on every modern phone.

Coming out of the meeting she lowers the communication barriers again and picks up a number of amber level communications including one from her cardio-monitor warning her to take some rest now.

Ah! The constant chastising FitBit!

Scenario 2: ‘Dimitrios’ and the Digital Me’ (D-Me) (near-term future)

Dimitrios is the sort of self-facilitating media node you would never get tired of slapping.

Dimitrios is wearing, embedded in his clothes (or in his own body), a voice activated ‘gateway’ or digital avatar of himself, familiarly known as ‘D-Me’ or ‘Digital Me’. […] He feels quite confident with his D-Me and relies upon its ‘intelligent‘ reactions.

Nope! Oh, sure, your phone can auto-suggest some stock phrases to reply to emails. But we are nowhere close to having a physically embedded system which learns from us and can be trusted to respond.

Dimitrios receives calls which are:

answered formally but smoothly in corresponding languages by Dimitrios’ D-Me with a nice reproduction of Dimitrios’ voice and typical accent,

Vocal cloning is here. It is almost out of the uncanny valley. But I think most people would prefer to send a quick text or voice-note rather than use an AI.

a call from his wife is further analysed by his D-Me. In a first attempt, Dimitrios’ ‘avatar-like’ voice runs a brief conversation with his wife, with the intention of negotiating a delay while explaining his current environment.

She's going to leave him.

Dimitrios’ D-Me has caught a message from an older person’s D-Me, located in the nearby metro station. This senior has left his home without his medicine and would feel at ease knowing where and how to access similar drugs in an easy way. He has addressed his query in natural speech to his D-Me.

This is weird. Yes, we have smart-agents which are just about good enough to recognise speech and understand it. Why is it being sent to Dimitrios?

Dimitrios happens to suffer from similar heart problems and uses the same drugs. Dimitrios’ D-Me processes the available data as to offer information to the senior. It ‘decides’ neither to reveal Dimitrios’ identity (privacy level), nor to offer Dimitrios’ direct help (lack of availability), but to list the closest drug shops, the alternative drugs, offer a potential contact with the self-help group. This information is shared with the senior’s D-Me, not with the senior himself as to avoid useless information overload

We're nowhere close to this. At most, you might be able to post on social media and hope someone could help. I like the idea of a local social network, and there's a good understanding of privacy. But this seems needlessly convoluted - why wouldn't the senior's D-Me just look up the information online?

Meanwhile, his wife’s call is now interpreted by his D-Me as sufficiently pressing to mobilise Dimitrios. It ‘rings’ him using a pre-arranged call tone. Dimitrios takes up the call with one of the available Displayphones of the cafeteria. Since the growing penetration of D-Me, few people still bother to run around with mobile terminals: these functions are sufficiently available in most public and private spaces and your D-Me can always point at the closest…functioning one!

A hit and a miss! They predicted the rise of personalised ringtones - which have now all but vanished - but no one wants to use a pay-phone when they have their own mobile!

While doing his homework their 9 year-old son is meant to offer some insights on everyday life in Egypt. In a brief 3-way telephone conference, Dimitrios offers to pass over the query to the D-Me to search for an available direct contact with a child in Egypt. Ten minutes later, his son is videoconferencing at home with a girl of his own age, and recording this real-time translated conversation as part of his homework.

ChatRoulette for kids! What could possibly go wrong!

Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.

Scenario 3 - Carmen: traffic, sustainability & commerce (further-term future)

Carmen is a modern, 21st century woman. Let's see how technology helps her:

She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.

Voice commands work - although usually only if you know the correct invocation.

AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.

The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.

Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved street. We have too many privacy concerns to allow PANs to share much more.

She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.

Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with built-in web browsers, most people do their shopping from their phone.

Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.

All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street

Terence Eden (@edent.tel) · Feb 22
🔁 @edent.tel:

New blog post: How close are we to a vision for 2010? https://shkspr.mobi/blog/2026/02/how-close-are-we-to-a-vision-for-2010/ Twenty five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with …

Twenty five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Computing" would become a reality.

The ISTAG published an optimistic paper called "Scenarios for ambient intelligence in 2010". It's a brilliant look at what the future might have been. Let's go through some of the scenarios and see how close 2026 is to 2000's vision of 2010.

Scenario 1: ‘Maria’ – Road Warrior (close-term future)

Our titular heroine steps off a long haul flight into a foreign country.

she knows that she can travel much lighter than less than a decade ago, when she had to carry a collection of different so-called personal computing devices (laptop PC, mobile phone, electronic organisers and sometimes beamers and printers). Her computing system for this trip is reduced to one highly personalised communications device, her ‘P–Com’ that she wears on her wrist.

Well… OK! Not a bad start. You probably wouldn't want everything controlled by your smart watch - but the mobile is a good substitute. Although wireless video casting works, you'd probably want a trusty USB-C just to make sure.

she is able to stroll through immigration without stopping because her P-Comm is dealing with the ID checks as she walks.

We're getting closer to digital ID. But outside of a few experiments, there's no international consensus. However, every modern passport has an NFC chip which can be read at most airports. You still need to hold your passport against the reader, but it's usually quicker than queuing for a human.
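Incidentally, the reason the reader needs your open passport is that access to the chip is derived from the machine-readable zone, which protects its fields with a simple weighted check digit defined in ICAO Doc 9303. A minimal sketch of that calculation (the worked example is the one given in the spec itself):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: map characters to values, weight 7-3-1, mod 10."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch.isalpha():
            return ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
        return 0  # the filler character '<' counts as zero

    weights = (7, 3, 1)
    return sum(value(ch) * weights[i % 3] for i, ch in enumerate(field)) % 10

# Worked example from ICAO Doc 9303: "AB2134<<<" has check digit 5.
print(mrz_check_digit("AB2134<<<"))
```

The reader combines the document number, date of birth, and expiry date (plus their check digits) to derive the chip's access key - which is why a torn or misprinted MRZ stops the e-gates cold.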

Maria heads to her rented car:

The car opens as she approaches. It starts at the press of a button: she doesn’t need a key. She still has to drive the car but she is supported in her journey downtown to the conference centre-hotel by the traffic guidance system that had been launched by the city government as part of the ‘AmI-Nation’ initiative two years earlier.

Lots of cars now have wireless entry and are button controlled. Rental cars often have mobile app unlocking.

The traffic guidance is not provided by local governments. A mixture of international satellite constellations provide positioning information, and a bunch of private companies build the traffic guidance on top.

Downtown traffic has been a legendary nightmare in this city for many years, and draconian steps were taken to limit access to the city centre. But Maria has priority access rights into the central cordon because she has a reservation in the car park of the hotel. Central access however comes at a premium price, in Maria’s case it is embedded in a deal negotiated between her personal agent and the transaction agents of the car-rental and hotel chains

Ah! The dream of personal agents. Not even close.

In the car Maria’s teenage daughter comes through on the audio system. Amanda has detected from ‘En Casa’ system at home that her mother is in a place that supports direct voice contact.

Hurrah for Bluetooth! Every car supports that now. Presence and location sensing is also common. Although the idea of a teenager willingly making a voice call is, sadly, a fantasy.

Her room adopts her ‘personality’ as she enters. The room temperature, default lighting and a range of video and music choices are displayed on the video wall.

Pffft! Nope. But do people really want this? The music and video are stored on her phone, so there's no need to transmit private data to a hotel.

Using voice commands she adjusts the light levels and commands a bath. Then she calls up her daughter on the video wall, while talking she uses a traditional remote control system to browse through a set of webcast local news bulletins from back home that her daughter tells her about. They watch them together.

Do you want an always-on Alexa in your hotel room? We have the technology, but we seem to shun it outside of specific scenarios.

We still have traditional remotes for browsing, and how lovely that they predicted the rise of simultaneous viewing!

Later on she ‘localises’ her presentation with the help of an agent that is specialised in advising on local preferences (colour schemes, the use of language).

I'd say we're there with a mixture of templates and LLMs. Translation and localisation is good enough.

She stores the presentation on the secure server at headquarters back in Europe. In the hotel’s seminar room where the sales pitch is take place, she will be able to call down an encrypted version of the presentation and give it a post presentation decrypt life of 1.5 minutes

Yup! Most things live in the cloud. Access controls are a thing. Whether people can be bothered to use them is another matter!
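The "post presentation decrypt life" maps surprisingly well onto how expiring access tokens work today. A toy sketch (the names and the shared secret are mine, not from the paper) using an HMAC-signed token with a short lifetime:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key held by the headquarters server

def issue_token(resource: str, lifetime_s: int = 90) -> str:
    """Sign a resource name together with its expiry timestamp."""
    expiry = int(time.time()) + lifetime_s
    payload = f"{resource}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_token(token: str) -> bool:
    """Accept only an unmodified token that has not yet expired."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    expiry = int(payload.rsplit("|", 1)[1])
    return time.time() < expiry

token = issue_token("sales-pitch.pdf")
print(check_token(token))                        # True while fresh
print(check_token(token.replace("pdf", "doc")))  # tampering fails the signature
```

Signed, expiring URLs from today's cloud storage providers work on essentially this principle, though they can't stop someone screen-recording the presentation before the token lapses.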

As she enters the meeting she raises communications access thresholds to block out anything but red-level ‘emergency’ messages

Do-Not-Disturb is a feature on every modern phone.

Coming out of the meeting she lowers the communication barriers again and picks up a number of amber level communications including one from her cardio-monitor warning her to take some rest now.

Ah! The constant chastising FitBit!

Scenario 2: ‘Dimitrios’ and the Digital Me’ (D-Me) (near-term future)

Dimitrios is the sort of self-facilitating media node you would never get tired of slapping.

Dimitrios is wearing, embedded in his clothes (or in his own body), a voice activated ‘gateway’ or digital avatar of himself, familiarly known as ‘D-Me’ or ‘Digital Me’. […] He feels quite confident with his D-Me and relies upon its ‘intelligent‘ reactions.

Nope! Oh, sure, your phone can auto-suggest some stock phrases to reply to emails. But we are nowhere close to having a physically embedded system which learns from us and can be trusted to respond.

Dimitrios receives calls which are:

answered formally but smoothly in corresponding languages by Dimitrios’ D-Me with a nice reproduction of Dimitrios’ voice and typical accent,

Voice cloning is here. It is almost out of the uncanny valley. But I think most people would prefer to send a quick text or voice-note rather than use an AI.

a call from his wife is further analysed by his D-Me. In a first attempt, Dimitrios’ ‘avatar-like’ voice runs a brief conversation with his wife, with the intention of negotiating a delay while explaining his current environment.

She's going to leave him.

Dimitrios’ D-Me has caught a message from an older person’s D-Me, located in the nearby metro station. This senior has left his home without his medicine and would feel at ease knowing where and how to access similar drugs in an easy way. He has addressed his query in natural speech to his D-Me.

This is weird. Yes, we have smart-agents which are just about good enough to recognise speech and understand it. Why is it being sent to Dimitrios?

Dimitrios happens to suffer from similar heart problems and uses the same drugs. Dimitrios’ D-Me processes the available data as to offer information to the senior. It ‘decides’ neither to reveal Dimitrios’ identity (privacy level), nor to offer Dimitrios’ direct help (lack of availability), but to list the closest drug shops, the alternative drugs, offer a potential contact with the self-help group. This information is shared with the senior’s D-Me, not with the senior himself as to avoid useless information overload

We're nowhere close to this. At most, you might be able to post on social media and hope someone could help. I like the idea of a local social network, and there's a good understanding of privacy. But this seems needlessly convoluted - why wouldn't the senior's D-Me just look up the information online?

Meanwhile, his wife’s call is now interpreted by his D-Me as sufficiently pressing to mobilise Dimitrios. It ‘rings’ him using a pre-arranged call tone. Dimitrios takes up the call with one of the available Displayphones of the cafeteria. Since the growing penetration of D-Me, few people still bother to run around with mobile terminals: these functions are sufficiently available in most public and private spaces and your D-Me can always point at the closest…functioning one!

A hit and a miss! They predicted the rise of personalised ringtones - which have now all but vanished - but no one wants to use a pay-phone when they have their own mobile!

While doing his homework their 9 year-old son is meant to offer some insights on everyday life in Egypt. In a brief 3-way telephone conference, Dimitrios offers to pass over the query to the D-Me to search for an available direct contact with a child in Egypt. Ten minutes later, his son is videoconferencing at home with a girl of his own age, and recording this real-time translated conversation as part of his homework.

ChatRoulette for kids! What could possibly go wrong!

Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.

Scenario 3 - Carmen: traffic, sustainability & commerce (further-term future)

Carmen is a modern, 21st century woman. Let's see how technology helps her:

She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.

Voice commands work - although usually only if you know the correct invocation.

AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.

The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.

Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved street. We have too many privacy concerns to allow PANs to share much more.
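The "is my driver late?" feature boils down to a position fix plus a distance estimate. A rough sketch using the haversine formula (the flat average-speed ETA model is my own simplification, not how any real app routes):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ≈ 6371 km

def driver_is_late(driver_pos, rider_pos, minutes_left, avg_speed_kmh=25):
    """Naive check: can the driver cover the remaining distance in time?"""
    distance = haversine_km(*driver_pos, *rider_pos)
    eta_minutes = distance / avg_speed_kmh * 60
    return eta_minutes > minutes_left

# London to Paris is roughly 344 km, so a driver 40 minutes out is definitely late.
print(driver_is_late((51.5074, -0.1278), (48.8566, 2.3522), 40))
```

Real apps feed the same position stream into a road-network router instead, but the privacy trade-off is identical: the feature only works because both parties broadcast their location continuously.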

She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.

Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with built-in web browsers, most people do their shopping from their phone.

Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.

All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street

Do you care whether the eggs have been packed yet? I can see that it would be useful to the store to have realtime info on stock levels (and they mostly do for online shopping) but why expose that to the user?

Would you bother using a public terminal?

When Carmen gets into the car, the VAN system (Vehicle Area Network) registers her and by doing that she sanctions the payment systems to start counting. A micro-payment system will automatically transfer the amount into the e-purse of the driver when she gets out of the car.

I don't think Uber's app uses Bluetooth to detect whether driver and passenger are in proximity. Maybe it should?

Cryptocurrencies still can't do instantaneous micro-transactions. But credit-cards work pretty well.

Carmen is alerted by her PAN that a Chardonnay wine that she has previously identified as a preferred choice is on promotion. She adds it to her shopping order

Personal Agents always working for the user! Again, a fantasy which has yet to emerge. The reality is more like a push notification from the shop.

On the way home the shared car system senses a bike on a dedicated lane approaching an intersection on their route. The driver is alerted […] so a potential accident is avoided.

Tesla's crappy implementation notwithstanding, modern cars are relatively good about detecting bikes, pedestrians, and other vehicles.

the traffic density has caused pollution levels to rise above a control threshold. The city-wide engine control systems automatically lower the maximum speeds (for all motorised vehicles) and when the car enters a specific urban ring toll will be deducted via the Automatic Debiting System (ADS)

Half-and-half. No one is allowing their car to be remotely controlled, although plenty of roads have dynamic speed limits. Most modern metros have Automatic Number Plate Recognition and can bill drivers who enter congestion zones.
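The billing side is straightforward once plates are read: charge each plate at most once per charging day, which is roughly how London's congestion charge behaves. A toy sketch (the flat rate and the data model are invented for illustration):

```python
from datetime import date

class CongestionZone:
    """Bill each number plate at most once per charging day."""

    def __init__(self, daily_charge: float = 15.0):  # hypothetical flat rate
        self.daily_charge = daily_charge
        self.billed: set[tuple[str, date]] = set()

    def vehicle_entered(self, plate: str, day: date) -> float:
        """Return the charge raised by this sighting (0 if already billed today)."""
        key = (plate.replace(" ", "").upper(), day)  # normalise the ANPR read
        if key in self.billed:
            return 0.0
        self.billed.add(key)
        return self.daily_charge

zone = CongestionZone()
print(zone.vehicle_entered("AB12 CDE", date(2026, 2, 22)))  # 15.0
print(zone.vehicle_entered("ab12cde", date(2026, 2, 22)))   # 0.0 - same car, same day
```

The hard part in practice isn't the ledger, it's the camera-side plate recognition and the appeals process when it misreads.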

Carmen arrives at the local distribution node (actually her neighbourhood corner shop) where she picks up her goods. The shop has already closed but the goods await Carmen in a smart delivery box. By getting them out, the system registers payment

This is pretty much how the Amazon Locker works!
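A locker pickup is essentially a one-time code redeemed against an order, with payment captured on collection. A minimal sketch of that flow (my guess at the general shape, not Amazon's actual system):

```python
import secrets

class SmartLocker:
    """Map one-time pickup codes to orders; collecting triggers payment."""

    def __init__(self):
        self.orders = {}    # code -> (order_id, amount)
        self.payments = []  # captured on collection

    def deposit(self, order_id: str, amount: float) -> str:
        """Courier stores the goods; the customer is sent a short one-time code."""
        code = secrets.token_hex(3)
        self.orders[code] = (order_id, amount)
        return code

    def collect(self, code: str) -> bool:
        """Open the door, capture payment, and invalidate the code."""
        order = self.orders.pop(code, None)
        if order is None:
            return False  # wrong or already-used code
        self.payments.append(order)
        return True

locker = SmartLocker()
code = locker.deposit("carmen-groceries", 23.50)
print(locker.collect(code))  # True - door opens, payment captured
print(locker.collect(code))  # False - the code only works once
```

Swap the code for an RFID tag on the goods themselves and you've got the paper's "by getting them out, the system registers payment" almost exactly.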

Scenario 4 – Annette and Solomon in the Ambient for Social Learning (far-term future)

Let's now go to an environmental study group meeting at a learning space.

Some are scheduled to work together in real time and space and thus were requested to be present together (the ambient accesses their agendas to do the scheduling).

Ah! Sadly not. At best we have shared calendars where people can look up suitable times, or Doodle polls where people can suggest their preferred times. Some integrated systems like Office365 will make a basic attempt to suggest meeting times - but it is a closed and proprietary system.
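Under the hood, "suggest a meeting time" is just interval arithmetic over everyone's busy periods. A sketch of finding the first common free slot (times as hours in a working day, purely illustrative):

```python
def first_free_slot(busy_calendars, day_start=9, day_end=17, length=1):
    """Merge everyone's busy intervals, then scan for a gap of `length` hours."""
    busy = sorted(iv for cal in busy_calendars for iv in cal)
    merged = []
    for start, end in busy:
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    cursor = day_start
    for start, end in merged:
        if start - cursor >= length:
            return (cursor, cursor + length)
        cursor = max(cursor, end)
    return (cursor, cursor + length) if day_end - cursor >= length else None

alice = [(9, 10.5), (13, 14)]  # busy 9:00-10:30 and 13:00-14:00
bob = [(10, 12)]               # busy 10:00-12:00
print(first_free_slot([alice, bob]))  # (12, 13)
```

The algorithm is trivial; the hard part - and the reason the paper's vision never arrived - is getting everyone's calendar out of its proprietary silo in the first place.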

Here's Annette:

Annette is an active and advanced student so the ambient says it might be useful if Annette spends some time today trying to pin down the problem with the model using enhanced interactive simulation and projection facilities. It then asks if Annette would give a brief presentation to the group. The ambient goes briefly through its understanding of Annette’s availability and preferences for the day’s work.

A demo of that today would wow people. LLMs can convincingly do some of these tasks, but they're not integrated into anything sufficiently complex.

Here's Solomon, a new participant:

The ambient establishes Solomon’s identity; asks Solomon for the name of an ambient that ‘knows’ Solomon; gets permission from Solomon to acquire information about Solomon’s background and experience in Environmental Studies. The ambient then suggests Solomon to join the meeting and to introduce himself to the group.

Again, we barely have coherent online identities. We certainly don't have trusted ambient intelligences who can claim to know us. I do like the fact that it asks for permission. Not always a given today!

In these private conversations the mental states of the group are synchronised with the ambient, individual and collective work plans are agreed and in most cases checked with the mentor through the ambient.

Nope!

During the presentation the mentor is feeding observations and questions to the ambient, together with William, an expert who was asked to join the meeting. William, although several thousand miles away, joins to make a comment and answer some questions.

Telepresence is a reality today. Video-calling experts is a natural and expected part of life here in 2026.

During the day the mentor and ambient converse frequently, establishing where the mentor might most usefully spend his time, and in some cases altering the schedule. The ambient and the mentor will spend some time negotiating shared experiences with other ambients – for example mounting a single musical concert with players from two or more distant sites.

I feel we're still about 25 years away from this future!

Key technological requirements for Ambient Intelligence (AmI)

The above scenarios are designed to be provocative thought experiments. If that's the future that people want, how would we get there?

The researchers suggest five technological requirements:

  1. Very unobtrusive hardware
  2. A seamless mobile/fixed communications infrastructure
  3. Dynamic and massively distributed device networks
  4. Natural feeling human interfaces
  5. Dependability and security

I think they're bang on the money there.

Hardware is getting unobtrusive. Wearables are limited at the moment to wrist-mounted sensors, some medical devices, and video glasses. The hardware in our environment is even better at being unobtrusive. Presence sensors, cameras, and microphones are embedded all around us. We're unfortunately limited by short-life batteries.

While the promise of 5G hasn't quite materialised, it is increasingly rare to be offline. WiFi is in every building, urban areas are flooded with mobile signals, and satellite comms are becoming cheaper. OK, IPv6 still isn't widespread, but handover is mostly seamless when a device moves between radio technologies.

Distributed device networks have yet to emerge. The current crop of monopolist technology providers want everything to go through their systems. There's very little standardisation.

Humane interfaces are getting there. Voice-to-text mostly works - but it does rely on training humans sufficiently well. Lots of things are still monolingual.

Security and privacy are constant thorns in the side of progress. Everything would be easier if we didn't need to worry about keeping people safe and secure. Dependability is the crux of any system - every time you experience a failure, you're less likely to return.

What Have We Learned?

The whole paper is worth reading, especially the longer versions of each scenario which dive into some of the socio-political issues.

Some of the visions for 2010 are here! We have GPS, ride-sharing, and video-calls with real-time translations. Our groceries and other items can be delivered to smart-lockers, locks are opened with digital keys, and voice cloning mostly works.

We don't have public pay-phones (not even video enabled ones!) and cars aren't centrally controlled. For all the promises of AI, it still isn't even close to providing a seamless experience.

What strikes me most about the possible futures discussed isn't their optimism nor their missteps - it's that most of these things could be possible today if there were sufficient open standards which the public and private sector adopted.

Anyone who has read "The Entrepreneurial State" knows that these things take significant public investment. We've reached a point where the private sector has generated wealth from previous public research, but seems unwilling to invest in any long-term research itself. That's short-changing our future.

#AI #future #predictions

ChatRoulette for kids! What could possibly go wrong!

Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.

Scenario 3 - Carmen: traffic, sustainability & commerce (further-term future)

Carmen is a modern, 21st century woman. Let's see how technology helps her:

She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.

Voice commands work - although usually only if you know the correct invocation.

AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.

The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.

Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved street. We have too many privacy concerts to allow PANs to share much more.

She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.

Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with build-in web-browsers, most people do their shopping from their phone.

Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.

All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street

Do you care whether the eggs have been packed yet? I can see that it would be useful to the store to have realtime info on stock levels (and they mostly do for online shopping) but why expose that to the user?

Would you bother using a public terminal?

When Carmen gets into the car, the VAN system (Vehicle Area Network) registers her and by doing that she sanctions the payment systems to start counting. A micro-payment system will automatically transfer the amount into the e-purse of the driver when she gets out of the car.

I don't think Uber's app uses Bluetooth to detect whether driver and passenger are in proximity. Maybe it should?

Cryptocurrencies still can't do instantaneous micro-transactions. But credit-cards work pretty well.

Carmen is alerted by her PAN that a Chardonnay wine that she has previously identified as a preferred choice is on promotion. She adds it to her shopping order

Personal Agents always working for the user! Again, a fantasy which has yet to emerge. The reality is more like a push notification from the shop.

On the way home the shared car system senses a bike on a dedicated lane approaching an intersection on their route. The driver is alerted […] so a potential accident is avoided.

Tesla's crappy implementation notwithstanding, modern cars are relatively good about detecting bikes, pedestrians, and other vehicles.

the traffic density has caused pollution levels to rise above a control threshold. The city-wide engine control systems automatically lower the maximum speeds (for all motorised vehicles) and when the car enters a specific urban ring toll will be deducted via the Automatic Debiting System (ADS)

Half-and-half. No one is allowing their car to be remotely controlled, although plenty of roads have dynamic speed limits. Most modern metros have Automatic Number Plate Recognition and can bill drivers who enter congestion zones.

Carmen arrives at the local distribution node (actually her neighbourhood corner shop) where she picks up her goods. The shop has already closed but the goods await Carmen in a smart delivery box. By getting them out, the system registers payment

This is pretty much how the Amazon Locker works!

Scenario 4 – Annette and Solomon in the Ambient for Social Learning (far-term future)

Let's now go to an environmental study group meeting at a learning space.

Some are scheduled to work together in real time and space and thus were requested to be present together (the ambient accesses their agendas to do the scheduling).

Ah! Sadly not. At best we have shared calenders where people can look up suitable times, or Doodle polls where people can suggest their preferred times. Some integrated systems like Office365 will do a basic attempt to suggest meeting times - but it is a closed and proprietary system.

Here's Annette:

Annette is an active and advanced student so the ambient says it might be useful if Annette spends some time today trying to pin down the problem with the model using enhanced interactive simulation and projection facilities. It then asks if Annette would give a brief presentation to the group. The ambient goes briefly through its understanding of Annette’s availability and preferences for the day’s work.

A demo of that today would wow people. LLMs can convincingly do some of these tasks, but they're not integrated into anything sufficiently complex.

Here's Solomon, a new participant:

The ambient establishes Solomon’s identity; asks Solomon for the name of an ambient that ‘knows’ Solomon; gets permission from Solomon to acquire information about Solomon’s background and experience in Environmental Studies. The ambient then suggests Solomon to join the meeting and to introduce himself to the group.

Again, we barely have coherent online identities. We certainly don't have trusted ambient intelligences who can claim to know us. I do like the fact that it asks for permission. Not always a given today!

In these private conversations the mental states of the group are synchronised with the ambient, individual and collective work plans are agreed and in most cases checked with the mentor through the ambient.

Nope!

During the presentation the mentor is feeding observations and questions to the ambient, together with William, an expert who was asked to join the meeting. William, although several thousand miles away, joins to make a comment and answer some questions.

Telepresence is a reality today. Video-calling experts in a natural and expected part life here in 2026.

During the day the mentor and ambient converse frequently, establishing where the mentor might most usefully spend his time, and in some cases altering the schedule. The ambient and the mentor will spend some time negotiating shared experiences with other ambients – for example mounting a single musical concert with players from two or more distant sites.

I feel we're still about 25 years away from this future!

Key technological requirements for Ambient Intelligence (AmI)

The above scenarios are designed to be provocative thought experiments. If that's the future that people want, how would we get there?

The researches suggest five technological requirements:

  1. Very unobtrusive hardware
  2. A seamless mobile/fixed communications infrastructure
  3. Dynamic and massively distributed device networks
  4. Natural feeling human interfaces
  5. Dependability and security

I think they're bang on the money there.

Hardware is getting unobtrusive. Wearables are limited at the moment to wrist-mounted sensors, some medical devices, and video glasses. The hardware in our environment is even better at being unobtrusive. Presence sensors, cameras, and microphones are embedded all around us. We're unfortunately limited by short-life batteries.

While the promise of 5G hasn't quite materialised, it is increasing rare to be offline. WiFi is in every building, urban areas are flooded with mobile signals, and satellite comms are becoming cheaper. OK, IPv6 still isn't widespread, but it is mostly seamless when a device moves between radio technologies.

Distributed device networks are still yet to emerge. The current crop of monopolist technology providers want everything to go through their systems. There's very little standardisation.

Humane interfaces are getting there. Voice-to-text mostly works - but it does rely on training humans sufficiently well. Lots of things are still monolingual.

Security and privacy are constant thorns in the side of progress. Everything would be easier if we didn't need to worry about keeping people safe and secure. Dependability is the crux of any system - every time you experience a failure, you're less likely to return.

What Have We Learned

The whole paper is worth reading, especially the longer versions of each scenario which dive into some of the socio-political issues.

Some of the visions for 2010 are here! We have GPS, ride-sharing, and video-calls with real-time translations. Our groceries and other items can be delivered to smart-lockers, locks are opened with digital keys, and voice cloning mostly works.

We don't have public pay-phones (not even video enabled ones!) and cars aren't centrally controlled. For all the promises of AI, it still isn't even close to providing a seamless experience.

What strikes me most about the possible futures discussed isn't their optimism nor their missteps - it's that most of these things could be possible today if there were sufficient open standards which the public and private sector adopted.

Anyone who has read "The Entrepreneurial State" knows that these things take significant public investment. We've reached a point where the private sector has generated wealth from previous public research, but seems unwilling to invest in any long-term research itself. That's short-changing our future.

#AI #future #predictions
Tristan Ferne (@tristanf) · Feb 23
🔁 @blog:

How close are we to a vision for 2010?

shkspr.mobi/blog/2026/02/how-c

Twenty-five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Computing" would become a reality.

The ISTAG published an optimistic paper called "Scenarios for ambient intelligence in 2010". It's a brilliant look at what the future might have been. Let's go through some of the scenarios and see how close 2026 is to 2001's vision of 2010.

Scenario 1: ‘Maria’ – Road Warrior (close-term future)

Our titular heroine steps off a long haul flight into a foreign country.

she knows that she can travel much lighter than less than a decade ago, when she had to carry a collection of different so-called personal computing devices (laptop PC, mobile phone, electronic organisers and sometimes beamers and printers). Her computing system for this trip is reduced to one highly personalised communications device, her ‘P–Com’ that she wears on her wrist.

Well… OK! Not a bad start. You probably wouldn't want everything controlled by your smart watch - but the mobile is a good substitute. Although wireless video casting works, you'd probably want a trusty USB-C just to make sure.

she is able to stroll through immigration without stopping because her P-Comm is dealing with the ID checks as she walks.

We're getting closer to digital ID. But outside of a few experiments, there's no international consensus. However, every modern passport has an NFC chip which can be read by most airports. You still need to hold your passport on the reader, but it's usually quicker than queuing for a human.
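That NFC chip is protected by keys derived from the machine-readable zone (MRZ) printed inside the passport, and every MRZ field carries an ICAO 9303 check digit. As a minimal sketch of that check-digit algorithm (the values below come from the ICAO 9303 specimen passport):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 repeating; '<' counts as 0."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10  # letters map A=10 … Z=35

    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# ICAO 9303 specimen passport fields:
print(mrz_check_digit("L898902C3"))  # document number → 6
print(mrz_check_digit("740812"))     # date of birth   → 2
print(mrz_check_digit("120415"))     # date of expiry  → 9
```

The document number, date of birth, and expiry date (each with its check digit) seed the Basic Access Control key, which is why a reader has to see the photo page before it can talk to the chip.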

Maria heads to her rented car:

The car opens as she approaches. It starts at the press of a button: she doesn’t need a key. She still has to drive the car but she is supported in her journey downtown to the conference centre-hotel by the traffic guidance system that had been launched by the city government as part of the ‘AmI-Nation’ initiative two years earlier.

Lots of cars now have wireless entry and are button controlled. Rental cars often have mobile app unlocking.

The traffic guidance is not provided by local governments. A mixture of international satellites provide positioning information, and a bunch of private companies provide traffic guidance.

Downtown traffic has been a legendary nightmare in this city for many years, and draconian steps were taken to limit access to the city centre. But Maria has priority access rights into the central cordon because she has a reservation in the car park of the hotel. Central access however comes at a premium price, in Maria’s case it is embedded in a deal negotiated between her personal agent and the transaction agents of the car-rental and hotel chains

Ah! The dream of personal agents. Not even close.

In the car Maria’s teenage daughter comes through on the audio system. Amanda has detected from ‘En Casa’ system at home that her mother is in a place that supports direct voice contact.

Hurrah for Bluetooth! Every car supports that now. Presence and location sensing is also common. Although the idea of a teenager willingly making a voice call is, sadly, a fantasy.

Her room adopts her ‘personality’ as she enters. The room temperature, default lighting and a range of video and music choices are displayed on the video wall.

Pffft! Nope. But do people really want this? The music and video are stored on her phone, so there's no need to transmit private data to a hotel.

Using voice commands she adjusts the light levels and commands a bath. Then she calls up her daughter on the video wall, while talking she uses a traditional remote control system to browse through a set of webcast local news bulletins from back home that her daughter tells her about. They watch them together.

Do you want an always-on Alexa in your hotel room? We have the technology, but we seem to shun it outside of specific scenarios.

We still have traditional remotes for browsing, and how lovely that they predicted the rise of simultaneous viewing!

Later on she ‘localises’ her presentation with the help of an agent that is specialised in advising on local preferences (colour schemes, the use of language).

I'd say we're there with a mixture of templates and LLMs. Translation and localisation is good enough.

She stores the presentation on the secure server at headquarters back in Europe. In the hotel’s seminar room where the sales pitch is to take place, she will be able to call down an encrypted version of the presentation and give it a post-presentation decrypt life of 1.5 minutes

Yup! Most things live in the cloud. Access controls are a thing. Whether people can be bothered to use them is another matter!

As she enters the meeting she raises communications access thresholds to block out anything but red-level ‘emergency’ messages

Do-Not-Disturb is a feature on every modern phone.

Coming out of the meeting she lowers the communication barriers again and picks up a number of amber level communications including one from her cardio-monitor warning her to take some rest now.

Ah! The constantly chastising Fitbit!

Scenario 2: ‘Dimitrios’ and the ‘Digital Me’ (D-Me) (near-term future)

Dimitrios is the sort of self-facilitating media node you would never get tired of slapping.

Dimitrios is wearing, embedded in his clothes (or in his own body), a voice activated ‘gateway’ or digital avatar of himself, familiarly known as ‘D-Me’ or ‘Digital Me’. […] He feels quite confident with his D-Me and relies upon its ‘intelligent’ reactions.

Nope! Oh, sure, your phone can auto-suggest some stock phrases to reply to emails. But we are nowhere close to having a physically embedded system which learns from us and can be trusted to respond.

Dimitrios receives calls which are:

answered formally but smoothly in corresponding languages by Dimitrios’ D-Me with a nice reproduction of Dimitrios’ voice and typical accent,

Voice cloning is here. It is almost out of the uncanny valley. But I think most people would prefer to send a quick text or voice-note rather than use an AI.

a call from his wife is further analysed by his D-Me. In a first attempt, Dimitrios’ ‘avatar-like’ voice runs a brief conversation with his wife, with the intention of negotiating a delay while explaining his current environment.

She's going to leave him.

Dimitrios’ D-Me has caught a message from an older person’s D-Me, located in the nearby metro station. This senior has left his home without his medicine and would feel at ease knowing where and how to access similar drugs in an easy way. He has addressed his query in natural speech to his D-Me.

This is weird. Yes, we have smart-agents which are just about good enough to recognise speech and understand it. Why is it being sent to Dimitrios?

Dimitrios happens to suffer from similar heart problems and uses the same drugs. Dimitrios’ D-Me processes the available data as to offer information to the senior. It ‘decides’ neither to reveal Dimitrios’ identity (privacy level), nor to offer Dimitrios’ direct help (lack of availability), but to list the closest drug shops, the alternative drugs, offer a potential contact with the self-help group. This information is shared with the senior’s D-Me, not with the senior himself as to avoid useless information overload

We're nowhere close to this. At most, you might be able to post on social media and hope someone could help. I like the idea of a local social network, and there's a good understanding of privacy. But this seems needlessly convoluted - why wouldn't the senior's D-Me just look up the information online?

Meanwhile, his wife’s call is now interpreted by his D-Me as sufficiently pressing to mobilise Dimitrios. It ‘rings’ him using a pre-arranged call tone. Dimitrios takes up the call with one of the available Displayphones of the cafeteria. Since the growing penetration of D-Me, few people still bother to run around with mobile terminals: these functions are sufficiently available in most public and private spaces and your D-Me can always point at the closest…functioning one!

A hit and a miss! They predicted the rise of personalised ringtones - which have now all but vanished - but no one wants to use a pay-phone when they have their own mobile!

While doing his homework their 9 year-old son is meant to offer some insights on everyday life in Egypt. In a brief 3-way telephone conference, Dimitrios offers to pass over the query to the D-Me to search for an available direct contact with a child in Egypt. Ten minutes later, his son is videoconferencing at home with a girl of his own age, and recording this real-time translated conversation as part of his homework.

ChatRoulette for kids! What could possibly go wrong!

Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.

Scenario 3 – Carmen: traffic, sustainability & commerce (further-term future)

Carmen is a modern, 21st century woman. Let's see how technology helps her:

She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.

Voice commands work - although usually only if you know the correct invocation.

AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.

The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.

Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved streets. We have too many privacy concerns to allow PANs to share much more.

She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing: milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.

Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with built-in web browsers, most people do their shopping from their phone.

Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.

All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street

Do you care whether the eggs have been packed yet? I can see that it would be useful to the store to have real-time info on stock levels (and they mostly do for online shopping) but why expose that to the user?

Would you bother using a public terminal?

When Carmen gets into the car, the VAN system (Vehicle Area Network) registers her and by doing that she sanctions the payment systems to start counting. A micro-payment system will automatically transfer the amount into the e-purse of the driver when she gets out of the car.

I don't think Uber's app uses Bluetooth to detect whether driver and passenger are in proximity. Maybe it should?

Cryptocurrencies still can't do instantaneous micro-transactions. But credit-cards work pretty well.
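If an app did want to detect proximity over Bluetooth, the usual trick is to convert a received signal strength (RSSI) into a rough distance with the log-distance path-loss model. A minimal sketch, with entirely illustrative calibration values and a made-up `in_same_vehicle` helper:

```python
def estimated_distance_m(rssi_dbm: float,
                         measured_power_dbm: float = -59.0,
                         path_loss_exponent: float = 2.0) -> float:
    """Rough distance estimate from a BLE RSSI reading.

    measured_power_dbm is the calibrated RSSI at 1 m (device-specific);
    path_loss_exponent is ~2.0 in free space, higher indoors.
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def in_same_vehicle(rssi_dbm: float, threshold_m: float = 3.0) -> bool:
    """Hypothetical check: treat anything within ~3 m as 'in the car'."""
    return estimated_distance_m(rssi_dbm) <= threshold_m

print(estimated_distance_m(-59.0))  # 1.0 m (at the calibration point)
print(in_same_vehicle(-90.0))       # False (roughly 35 m away)
```

RSSI is noisy in practice, which is probably why production apps lean on GPS plus a "start trip" button instead.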

Carmen is alerted by her PAN that a Chardonnay wine that she has previously identified as a preferred choice is on promotion. She adds it to her shopping order

Personal Agents always working for the user! Again, a fantasy which has yet to emerge. The reality is more like a push notification from the shop.

On the way home the shared car system senses a bike on a dedicated lane approaching an intersection on their route. The driver is alerted […] so a potential accident is avoided.

Tesla's crappy implementation notwithstanding, modern cars are relatively good about detecting bikes, pedestrians, and other vehicles.

the traffic density has caused pollution levels to rise above a control threshold. The city-wide engine control systems automatically lower the maximum speeds (for all motorised vehicles) and when the car enters a specific urban ring toll will be deducted via the Automatic Debiting System (ADS)

Half-and-half. No one is allowing their car to be remotely controlled, although plenty of roads have dynamic speed limits. Most major cities have Automatic Number Plate Recognition and can bill drivers who enter congestion zones.
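The billing side of a congestion zone is conceptually simple: a camera's OCR stage produces a plate read, and a rules engine decides whether that read is chargeable. A toy sketch, with invented tariff values (real schemes such as London's differ in hours, price, and exemptions):

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative tariff only.
CHARGE_HOURS = range(7, 18)   # chargeable 07:00-17:59
CHARGE_GBP = 15.00

@dataclass
class PlateRead:
    plate: str          # output of the ANPR camera's OCR stage
    seen_at: datetime

def charge_for(read: PlateRead, exempt_plates: set[str]) -> float:
    """Decide what one sighting inside the zone costs."""
    if read.plate in exempt_plates:
        return 0.0
    if read.seen_at.weekday() >= 5:   # weekends are free in this sketch
        return 0.0
    return CHARGE_GBP if read.seen_at.hour in CHARGE_HOURS else 0.0
```

The hard engineering is all upstream of this: reading dirty plates at speed, de-duplicating multiple camera hits, and handling disputed reads.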

Carmen arrives at the local distribution node (actually her neighbourhood corner shop) where she picks up her goods. The shop has already closed but the goods await Carmen in a smart delivery box. By getting them out, the system registers payment

This is pretty much how the Amazon Locker works!

Scenario 4 – Annette and Solomon in the Ambient for Social Learning (far-term future)

Let's now go to an environmental study group meeting at a learning space.

Some are scheduled to work together in real time and space and thus were requested to be present together (the ambient accesses their agendas to do the scheduling).

Ah! Sadly not. At best we have shared calendars where people can look up suitable times, or Doodle polls where people can suggest their preferred times. Some integrated systems, like Office365, make a basic attempt to suggest meeting times - but they are closed and proprietary.

Here's Annette:

Annette is an active and advanced student so the ambient says it might be useful if Annette spends some time today trying to pin down the problem with the model using enhanced interactive simulation and projection facilities. It then asks if Annette would give a brief presentation to the group. The ambient goes briefly through its understanding of Annette’s availability and preferences for the day’s work.

A demo of that today would wow people. LLMs can convincingly do some of these tasks, but they're not integrated into anything sufficiently complex.

Here's Solomon, a new participant:

The ambient establishes Solomon’s identity; asks Solomon for the name of an ambient that ‘knows’ Solomon; gets permission from Solomon to acquire information about Solomon’s background and experience in Environmental Studies. The ambient then suggests Solomon to join the meeting and to introduce himself to the group.

Again, we barely have coherent online identities. We certainly don't have trusted ambient intelligences who can claim to know us. I do like the fact that it asks for permission. Not always a given today!

In these private conversations the mental states of the group are synchronised with the ambient, individual and collective work plans are agreed and in most cases checked with the mentor through the ambient.

Nope!

During the presentation the mentor is feeding observations and questions to the ambient, together with William, an expert who was asked to join the meeting. William, although several thousand miles away, joins to make a comment and answer some questions.

Telepresence is a reality today. Video-calling experts is a natural and expected part of life here in 2026.

During the day the mentor and ambient converse frequently, establishing where the mentor might most usefully spend his time, and in some cases altering the schedule. The ambient and the mentor will spend some time negotiating shared experiences with other ambients – for example mounting a single musical concert with players from two or more distant sites.

I feel we're still about 25 years away from this future!

Key technological requirements for Ambient Intelligence (AmI)

The above scenarios are designed to be provocative thought experiments. If that's the future that people want, how would we get there?

The researchers suggest five technological requirements:

  1. Very unobtrusive hardware
  2. A seamless mobile/fixed communications infrastructure
  3. Dynamic and massively distributed device networks
  4. Natural feeling human interfaces
  5. Dependability and security

I think they're bang on the money there.

Hardware is getting unobtrusive. Wearables are limited at the moment to wrist-mounted sensors, some medical devices, and video glasses. The hardware in our environment is even better at being unobtrusive. Presence sensors, cameras, and microphones are embedded all around us. We're unfortunately limited by short-life batteries.

While the promise of 5G hasn't quite materialised, it is increasingly rare to be offline. WiFi is in every building, urban areas are flooded with mobile signals, and satellite comms are becoming cheaper. OK, IPv6 still isn't widespread, but it is mostly seamless when a device moves between radio technologies.

Distributed device networks have yet to emerge. The current crop of monopolist technology providers want everything to go through their systems. There's very little standardisation.

Humane interfaces are getting there. Voice-to-text mostly works - but it does rely on training humans sufficiently well. Lots of things are still monolingual.

Security and privacy are constant thorns in the side of progress. Everything would be easier if we didn't need to worry about keeping people safe and secure. Dependability is the crux of any system - every time you experience a failure, you're less likely to return.

What Have We Learned?

The whole paper is worth reading, especially the longer versions of each scenario which dive into some of the socio-political issues.

Some of the visions for 2010 are here! We have GPS, ride-sharing, and video-calls with real-time translations. Our groceries and other items can be delivered to smart-lockers, locks are opened with digital keys, and voice cloning mostly works.

We don't have public pay-phones (not even video enabled ones!) and cars aren't centrally controlled. For all the promises of AI, it still isn't even close to providing a seamless experience.

What strikes me most about the possible futures discussed isn't their optimism or their missteps - it's that most of these things could be possible today if there were sufficient open standards which the public and private sector adopted.

Anyone who has read "The Entrepreneurial State" knows that these things take significant public investment. We've reached a point where the private sector has generated wealth from previous public research, but seems unwilling to invest in any long-term research itself. That's short-changing our future.

#AI #future #predictions

© 2026 IN2 Digital Innovations GmbH. All rights reserved.