
This month, artificial intelligence bots entered Santa’s Grotto. For one thing, AI-enabled gifts are on the rise – as I know myself, having just been given an impressive AI dictation tool.
Meanwhile, retailers such as Walmart are deploying AI tools to help frustrated shoppers during the holidays. Think of these, if you will, as the digital equivalent of a personal elf offering shopping and gifting shortcuts. And judging by recent reviews, they seem to work well.
But here’s the paradox: as AI spreads into our lives — and our Christmas stockings — hostility remains sky-high. Earlier this month, for example, a British government study found that four in ten people expect AI to deliver benefits. However, three in ten anticipate significant harm from “data security” breaches, the “spread of misinformation” and “job displacement”.
This is perhaps not surprising. The risks are real and well known. However, as we move into 2025, it is useful to reflect on three oft-overlooked points about the current anthropology of AI that help frame this paradox in a constructive way.
First, we need to rethink which “A” we use in “AI” today. Yes, machine learning systems are “artificial”. However, bots are not always — or even often — a replacement for our human brains, a flesh-and-blood cognitive alternative. Instead, they help us work faster and move through tasks more efficiently. Marketing is just one case in point.
So maybe we should reframe AI as “augmented” or “accelerated” intelligence – or “agentic” intelligence, the buzzword of the moment. A recent Nvidia blog calls this the “next frontier” of AI, meaning that bots can act as autonomous agents, performing tasks for humans at their behest. It will be a key theme in 2025. Or as Google recently declared when it announced its new Gemini AI model: “The age of the AI agent is here.”
Second, we need to think beyond the cultural frame of Silicon Valley. So far, “Anglophone actors” have “dominated” the debate around AI on the world stage, scholars Stephen Cave and Kanta Dihal note in the introduction to their book Imagining AI. That reflects America’s technological dominance.
However, other cultures view AI a little differently. Attitudes in the developing world are far more positive than in developed countries, James Manyika, co-chair of the UN’s AI advisory body and a senior Google executive, recently told Chatham House.
Countries like Japan look different too. The Japanese public has long been more positive about robots than its Anglophone counterparts, and this is now also reflected in attitudes toward AI systems.
Why? One reason is Japan’s labor shortage (and a wariness about plugging this gap with immigrants, which makes robots easier to welcome). Another is popular culture. While Hollywood films of the late 20th century such as The Terminator or 2001: A Space Odyssey left Anglophone audiences fearful of intelligent machines, Japanese audiences grew up with Astro Boy, a saga that shows robots in a good light.
Astro Boy’s creator, Osamu Tezuka, has attributed this to the influence of the Shinto religion, which does not draw strict boundaries between living and inanimate objects – unlike the Judeo-Christian traditions. “The Japanese make no distinction between man, the superior being, and the world about him,” he once observed. “We easily accept robots along with the wider world, the insects, the rocks – it’s all one.”
That is reflected in how companies such as Sony or SoftBank design AI products today, one of the essays in Imagining AI notes: they try to create “robots with hearts” in a way that American consumers might find disturbing.
Third, this cultural diversity shows that our responses to AI need not be set in stone, but can evolve as technological changes and cross-cultural influences unfold. Consider facial recognition technologies. In 2017, Ken Anderson, an anthropologist working at Intel, and his colleagues studied Chinese and American consumer attitudes towards facial recognition tools, and found that while the former embraced the technology for everyday tasks such as banking, the latter did not.
That difference seemed to reflect the strength of American privacy concerns. But the same year the study was published, Apple introduced facial recognition features on the iPhone, which were quickly adopted by US users. Attitudes had shifted. The bottom line, then, is that “cultures” are not like Tupperware boxes, sealed and immobile. They resemble slow-moving rivers, with muddy banks, into which new streams flow.
So whatever 2025 brings, one thing that can be predicted is that our perception of AI will shift subtly as the technology becomes more common. That may shock some, but it should help us focus on framing the technology debate constructively and on ensuring that people remain in control of their digital “agents” – not the other way around. Investors may be piling into AI today, but all of us need to ask which “A” we want in that AI tag.