If you have kids, and if they are of the age where they have a phone, a tablet, a computer, or a video game console (or possibly all four), you may, as a parent, be scared. Like, very scared. Heck, even if you don’t have children, you might be scared by the idea of AI-dependent kids on the loose.
And, well, this fear might not be unfounded — screens could prevent children from forming social bonds or from fostering the kind of creativity that practically defines childhood. After all, the idea that boredom is sort of a prerequisite for creativity is pretty well-cited (here’s just one study, which is specifically about boredom at work rather than in children, but it’s the same idea).
“Boredom might spark creativity because a restless mind hungers for stimulation. Maybe traversing an expanse of tedium creates a sort of cognitive forward motion.”
Wired, How Being Bored Out of Your Mind Makes You More Creative
And that’s only the internet and screens! What will happen with AI?
More than 20% of households already have at least one smart speaker, and questions are beginning to arise. Like, should children be required to speak politely to AI assistants? And how strictly should the assistants enforce this behavior? Even if we believe in the greatness of progress, we, as parents, can nonetheless be stressed out by the risk of AI messing with our kids.
It’s like there’s a competition between our kids’ brains and AI, and the prize is time and attention. Kids’ brains need (bored) time to grow. AI needs attention, because at some point, that’s how its creators get paid.
Are Things Really So Bad?
One could argue that today’s screens (and eventually whatever AI will come to mean, whether it has a traditional screen or not) are no different from the good old-fashioned TV that has lived in our homes for decades. From cartoons to reality shows, TV has certainly had the reputation of being debilitating.
"Dad, just one more episode. I'm finishing season three!"
So does AI really bring anything worse to the table? Maybe. After all:
- The distribution medium itself is more pervasive: screens are everywhere, and access to them is hard to control.
- Coming back to the question at the beginning of this section, increased consumption is explicitly encouraged. For example, modern applications use machine learning-powered tricks to serve up recommendations, content, notifications, and so on.
It’s like a constant battle of good versus evil. Content platforms are getting very adept at knowing what we like, and the more we use them, the better they get at delivering even more of what we like (good… but to what extent?). Free games might let you win, then lose, then win, until you, or more likely your offspring, are sufficiently engaged to be roped into in-app purchases for add-ons (free entertainment is nice, but at what cost?).
As it turns out, it’s not just something worried parents are thinking about, either. Nick Seaver from the Department of Anthropology at Tufts University wrote a paper recently about the idea of recommender systems as traps. So while immediate concerns might be with children, it’s probably actually a much larger concern for everyone, including those developing AI systems.
Designed for Addiction
The competition between time and attention brings up an important question: will we at some point see applications that are designed specifically not to be addictive?
There are two important facets to this question that would determine the reality of such an application:
- The technological side: That is, is it actually possible to build a system that would optimize for a positive content outcome rather than for time spent on screen? One of the fundamental challenges here is that, for now, no one really knows what optimal screen time means. Oxford University researchers Amy Orben and Andrew Przybylski argue that current studies correlating screen time with well-being aren’t as solid as they could be.
- The motivational aspect: Importantly, the other question is why anyone (or any company) building this kind of application would pursue a reduction in screen time if it means possibly losing traction and support. In reality, such a movement will probably only happen through regulation or, more realistically, some ethical charter. And the movement has already begun — Sophie Beren of the University of Pennsylvania published “Look Up”: The Cell Phone Manifesto in 2018, calling for more regulation of cell phones (though we can easily see how that could be extended to AI in general), especially when it comes to children.
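To make the technological facet concrete, here is a toy sketch of what “optimizing for positive outcomes rather than time on screen” could look like. Everything here is hypothetical — the item fields, the `predicted_benefit` scores, and the weights are invented for illustration, not taken from any real recommender system.

```python
def score(item, minutes_watched_today, alpha=1.0, beta=0.001):
    """Score an item as predicted benefit minus a penalty that grows
    with both the item's length and time already spent on screen today."""
    penalty = beta * minutes_watched_today * item["duration_min"]
    return alpha * item["predicted_benefit"] - penalty

def recommend(items, minutes_watched_today):
    """Return the best-scoring item, or None if every score is negative —
    i.e., the system's recommendation is to stop watching."""
    best = max(items, key=lambda it: score(it, minutes_watched_today))
    return best if score(best, minutes_watched_today) > 0 else None

# Hypothetical catalog with made-up benefit scores.
catalog = [
    {"name": "math puzzle video", "predicted_benefit": 0.9, "duration_min": 5},
    {"name": "autoplay cartoon", "predicted_benefit": 0.4, "duration_min": 20},
]

print(recommend(catalog, minutes_watched_today=10))   # recommends the puzzle
print(recommend(catalog, minutes_watched_today=240))  # recommends nothing
```

Early in the day, the short, high-benefit item wins; after four hours of screen time, every score goes negative and the sketch recommends nothing at all — which is exactly the behavior that, as discussed above, no engagement-funded company has much incentive to ship.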
The other question that all of this brings about is whether or not our children’s brains will start to be wired differently from interacting with AI from a young age.
"Will AI make us lose our Jiminy Cricket? Or worse, our Peter Pan?
What makes the equation more complicated is that the long-term impact of AI on kids is probably not only related to screens and apps. There are likely plenty of more subtle, long-lasting effects of living in a world where you are surrounded by AI. This is all very hypothetical, of course, but:
- If AI becomes more pervasive, we will have continuous, unlimited access to a voice that will always be around to answer our questions. So why answer your own? Maybe AI content at an early age will replace part of our inner voice.
- If AI empowers different kinds of real-life bots, we might have a myriad of “new species” that kids could bond with, each presenting a wide range of emotional and intelligent responses. Maybe early AI interaction will subtly shift the moral compass by blurring the barrier we have against hurting sentient or emotional beings.
Stefania Druga and Randi Williams, research assistants at the MIT Media Lab, have already started examining this very dilemma. Their research is relatively encouraging, but there is the other side: the many reported stories of humans harming robots (and the attempts to understand why).
“The bullying got so bad during the experiment that researchers had to program an “abuse-evading” algorithm for the robot. When the robot approached a human, it would see whether that person was under 4 feet 6 inches — kids. If there wasn’t a significant density of taller people around — i.e., adult supervision — the robot would flee toward grownups.”
From Business Insider’s Japanese researchers watch a group of children beat up their robot in a shopping mall
AI for Good
The other side of the AI-will-change-us-all-for-the-worse coin is the much more optimistic idea that maybe AI will actually help kids be smarter and more accomplished, and that technology always finds a way to cure its own mistakes. Couldn’t we design AI that helps kids be smarter (like, way, way smarter)?
"Warm learning is better than hot chocolate"
Intuitively, the game AI plays in capturing kids’ attention could also be played the other way around. That is, AI could push their creativity, make them more curious, and help them connect or learn more deeply. We could turn AI into an educational dream.
For example, learning to read is incredibly complicated, and it can take a while for kids to pick it up. What’s more, some kids happen to learn to read much faster than others. But why? There might be some variance in brain composition behind this, but it could also partially be that some kids just happen to hit on a particular neural scheme that works well.
Warm learning is better than hot chocolate: in the future, we could hack this innate trick that some kids stumble upon and build a system that makes learning very fast. After all, in Japan, there are already “dunce robots” that are designed to help children learn.
Learning aside, generative technologies might also create a new form of art in the long run — and not just visual art, but music and literature too. We should help our kids get used to and foster this new kind of creativity.
If machines can explore the universe of possibilities without limits, generating endless combinations, our kids should have the creative tools to explore these possibilities in a new way as well. That means new ways to write, draw, or create music that would be AI-assisted, possibly involving subtle ways to interact with an AI that you would only fully grasp as a child.
This might sound far off, but think about it: we already have dating apps that basically try to help inflexible beings (adults) start very complex relationships, while operating on a statistically biased subset of the population that is, let’s face it, not great at accomplishing this task (and I was there, so I know). Having AI help kids connect with like-minded youngsters from different cultures and backgrounds to help them find common ground for cultural melding isn’t such a leap.
AI Privilege & Perspective
There’s nothing more vain than trying to solve the problems of people who don’t really have any. At the end of the day, too much AI isn’t a relevant problem if access to education, food, water, or physical safety is an actual issue. That kind of perspective is worth keeping.
But if you are well off enough to consider AI a problem, you can still have perspective. AI might be less of something that we have to solve and more something we have to deal with — a new reality that we’re living in to which kids will simply adapt. Yes, there will be changes — but society is constantly changing. With AI, change might just have come faster (and more visibly) than you’re used to.