Anyone who has followed the rhetoric about artificial intelligence in recent years has likely heard some version of the claim that AI is inevitable. The common theme is that AI is already here, that it is indispensable, and that those who hesitate to embrace it are only hurting themselves.
In the business world, AI advocates tell companies and workers that they will fall behind if they fail to integrate generative AI into their operations. In the sciences, AI proponents promise that AI will help cure diseases that were previously incurable.
In higher education, AI advocates warn teachers that students must learn to use AI or risk losing their competitive edge when applying for jobs.
And when it comes to national security, AI advocates argue that either the nation invests heavily in AI weaponry or it will be at a disadvantage against the Chinese and the Russians, who are already doing so.
The argument across these different domains is essentially the same: the time for skepticism about AI has passed. The technology will shape the future whether you like it or not. You have a choice to learn how to use it or be left out of that future. Anyone who tries to stand in technology's way is as hopeless as the handloom weavers who resisted mechanical looms in the early 19th century.
Over the past few years, my colleagues and I at the Applied Ethics Center at the University of Massachusetts Boston have been studying the ethical questions raised by the widespread adoption of AI, and I believe the inevitability argument is misleading.
History and hindsight
This argument is, in fact, the latest version of a deterministic view of technological development: the belief that once people start innovating, there is no stopping them. Some genies, in other words, do not go back into the bottle. The best you can do is harness them to your own good purposes.
This deterministic approach to technology has a long history. It has been applied to the influence of the printing press, as well as to the rise of the automobile and the infrastructure it required, among other developments.
The decades-long dominance of the automobile and the infrastructure that supports it seems inevitable in hindsight. (Credit: Bbeachy2001/Wikimedia Commons, CC BY)
But when it comes to AI, I think the argument for technological determinism is overblown and oversimplified.
AI in the field
Consider the argument that companies cannot afford to sit out the AI game. In fact, it has yet to be shown that AI delivers significant productivity gains to the companies that use it. According to a July 2024 report in The Economist, the technology has so far had almost no economic impact.
The role of AI in higher education also remains an open question. Although universities have invested heavily in AI-related initiatives over the past two years, there is evidence that they may have jumped the gun.
The technology can serve as an interesting pedagogical tool. A Plato chatbot, for example, which lets students hold a text conversation with a bot posing as Plato, is a neat gimmick.
However, AI is already beginning to displace some of the best tools teachers have for assessment and for developing critical thinking, such as writing assignments. The college essay is going the way of the dinosaur as more and more teachers give up on being able to tell whether students are writing their own papers. What is the cost-benefit argument for abandoning writing, an important and useful traditional skill?
In science and medicine, the use of AI looks promising. Its role in understanding the structure of proteins, for example, could be significant for treating disease. The technology is also transforming medical imaging and has helped accelerate the drug discovery process.
But that excitement may be overstated. AI-based predictions about which COVID-19 cases would become severe have failed spectacularly, and doctors are relying too heavily on the technology's diagnostic capabilities, often against their own better clinical judgment. So even in this area of great potential, it is unclear what impact AI will ultimately have.
In retrospect, using AI to help diagnose COVID-19 patients was problematic.
In national security, the argument for investing in AI development is compelling. Because the stakes could be so high, if China and Russia are developing AI-driven autonomous weapons, the U.S. cannot afford to fall behind. I actually buy this argument, as far as it goes.
But surrendering completely to this form of reasoning, tempting as it is, is likely to lead the United States to overlook the disproportionate impact these systems have on nations too poor to participate in the AI arms race; the great powers could deploy the technology in conflicts within those countries. Just as important, this argument favors an arms race over arms control and discounts the possibility of collaborating with adversaries to limit military AI systems.
One step at a time
As we examine AI's potential benefits and risks in these various domains, some skepticism about the technology is warranted. I believe AI should be adopted piecemeal and with a nuanced approach rather than subjected to blanket claims of inevitability. Two things are worth keeping in mind when developing this cautious view:
First, companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being seen as inevitable and necessary, since they make their living from its adoption. It is important to pay attention to who is making claims of inevitability, and why.
Second, it is worth taking a lesson from recent history. Over the past 15 years, smartphones and the social media apps that run on them came to be seen as a fact of life, a technology as inevitable as it is transformative. Then data started emerging about the mental health harm they inflict on teenagers, especially young girls. School districts across the United States began banning phones to protect the attention spans and mental health of their students. And some people have gone back to using flip phones as a quality-of-life change, to avoid smartphones altogether.
After a long experiment with children’s mental health, spurred by claims of technological determinism, Americans changed tack. What seemed fixed turned out to be changeable. There is still time to avoid repeating the same mistakes regarding artificial intelligence, which could have a major impact on society.
Nir Eisikovits is a professor of philosophy and director of the Applied Ethics Center at the University of Massachusetts Boston. This article is republished from The Conversation under a Creative Commons license. Read the original article.