
Artificial Intelligence: the Quintessential Ambivalent Technology

  • Writer: Russell E. Willis
  • Jul 23
  • 6 min read

Updated: Jul 25

Episode 1 of the AI Strategist Series (1)


Ambivalent Technology (2)


In 1986, historian of technology Melvin Kranzberg articulated what would become known as Kranzberg's First Law: "Technology is neither good nor bad; nor is it neutral."(3) This deceptively simple statement captures a profound truth about the relationship between human society and technological innovation. Kranzberg's insight reveals that technology exists in a state of moral ambivalence—it is not inherently beneficial or harmful, yet it is never without consequence. As we grapple with the rapid advancement of artificial intelligence, Kranzberg's Law provides an essential framework for understanding how AI exemplifies the complex, ambivalent nature of transformative technologies.


Understanding Kranzberg's Law


Kranzberg's Law challenges both technological determinism and technological neutrality. It rejects the notion that technology automatically drives social change in predetermined directions, while simultaneously denying that technology is a passive tool with no inherent bias or influence. Instead, the law recognizes that technology's impact depends entirely on context—how it is designed, deployed, regulated, and integrated into existing social structures.

The concept of "ambivalent technology" emerges from this understanding. Ambivalent technologies possess the capacity for both tremendous benefit and significant harm, often simultaneously. They create new possibilities while eliminating others, solve certain problems while generating new ones, and empower some groups while potentially disadvantaging others. The same technological capability can be liberating in one context and oppressive in another, beneficial for one community and harmful for another.

AI as the Quintessential Ambivalent Technology


Artificial intelligence represents perhaps the most striking example of ambivalent technology in our current era. AI systems demonstrate remarkable capabilities that can enhance human productivity, solve complex problems, and improve quality of life across numerous domains. Yet these same systems raise profound concerns about privacy, employment, bias, and the concentration of power. The ambivalence is not accidental—it stems from the fundamental nature of AI as a general-purpose technology that amplifies human capabilities and intentions, both positive and negative.

Consider AI's impact on healthcare. Machine learning algorithms can analyze medical images with accuracy that rivals, and in some tasks exceeds, that of expert clinicians, potentially detecting cancers earlier and saving lives. AI-powered drug discovery platforms can identify promising therapeutic compounds in months rather than years. Predictive models can help hospitals allocate resources more efficiently and identify patients at risk of complications. These applications represent AI's tremendous potential for human benefit.


However, the same underlying technologies that enable these medical breakthroughs also create new vulnerabilities. AI diagnostic systems can perpetuate or amplify existing biases in medical data, leading to disparate health outcomes for different demographic groups. The collection and analysis of vast amounts of health data raises privacy concerns and questions about data ownership. The high cost of developing and implementing AI systems may exacerbate healthcare inequalities between wealthy and resource-poor communities.


The concept of technology as ambivalent is increasingly essential for understanding moral, economic, and political responsibility. In this essay I introduce three contexts in which that ambivalence is especially visible: 1) employment, 2) bias and fairness, and 3) surveillance and privacy.


The Employment Paradox


AI's impact on employment illustrates another dimension of technological ambivalence. Automation powered by AI has the potential to eliminate dangerous, repetitive, and physically demanding jobs, freeing humans to pursue more creative and fulfilling work. AI tools can augment human capabilities, making workers more productive and enabling them to focus on higher-value tasks that require emotional intelligence, creativity, and complex problem-solving.


Simultaneously, AI threatens to displace millions of workers across various sectors. Unlike previous waves of automation that primarily affected manual labor, AI systems can now perform cognitive tasks previously thought to require human intelligence. Customer service representatives, financial analysts, radiologists, and even software developers face potential displacement by AI systems. The benefits of increased productivity may accrue primarily to capital owners rather than workers, potentially exacerbating economic inequality.


The ambivalence becomes even more complex when considering that AI's impact on employment is not uniformly distributed. While some jobs disappear, others are created in AI development, deployment, and maintenance. However, these new opportunities often require different skills and may be located in different geographic regions than the displaced jobs. The transition period creates winners and losers, and the social and political consequences of this disruption can be severe.


Bias and Fairness Challenges


AI's relationship with bias exemplifies Kranzberg's insight that technology is never neutral. AI systems learn from data that reflects historical patterns of human behavior and decision-making. When this data contains biases—whether conscious or unconscious—AI systems can perpetuate and amplify these biases at scale. Facial recognition systems that perform poorly on darker skin tones, hiring algorithms that discriminate against women, and criminal justice risk assessment tools that exhibit racial bias all demonstrate how AI can embed and institutionalize unfairness.
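
To make the mechanism concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn; every name and number is illustrative, not drawn from any real system) of how a model trained on historically biased decisions reproduces that bias at prediction time:

```python
# Minimal sketch: a model trained on historically biased hiring
# decisions learns to penalize a protected group. Synthetic data;
# illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: a skill score (what decisions *should*
# depend on) and a protected attribute (0 or 1) that should not matter.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical labels encode bias: past decision-makers favored group 0,
# so the recorded outcomes mix skill with group membership.
past_hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_hired)

# The learned model now rates identical candidates differently by group.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
# roughly [0.9, 0.5]: equal skill, unequal predicted chance of hiring
```

Nothing in this code singles out group 1 for worse treatment; the disparity is inherited entirely from the historical record, which is precisely how bias scales without anyone intending it.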


A particularly problematic form of bias arises from embedded errors. AI systems sift through vast datasets with algorithms that implicitly treat the data as accurate, even though some of it is wrong, whether from data-entry errors, unchallenged theories and formulas, or information crafted to mislead for political, economic, religious, or personal motives.


Yet AI also offers unprecedented opportunities to identify and address bias. Machine learning techniques can detect patterns of discrimination that might be invisible to human observers. AI systems can be designed with fairness constraints and audited for biased outcomes. The same computational power that can perpetuate bias can also be harnessed to create more equitable systems, provided there is sufficient commitment to fairness in design and implementation.
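
As a sketch of what such an audit can look like, the following hypothetical snippet compares selection rates across groups and computes a disparate-impact ratio, with the four-fifths (80%) rule from U.S. employment practice as a common reference point; the function names and toy data are my own illustrations, not any particular library's API:

```python
# Minimal sketch of a post-hoc fairness audit: compare a model's
# positive-prediction (selection) rates across protected groups.
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Fraction of positive predictions within each group."""
    return {int(g): float(y_pred[group == g].mean()) for g in np.unique(group)}

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest.
    1.0 is parity; values below ~0.8 are a common red flag
    (the "four-fifths rule")."""
    rates = selection_rates(y_pred, group)
    return min(rates.values()) / max(rates.values())

# Toy usage with made-up predictions for two groups:
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(selection_rates(y_pred, group))         # {0: 0.8, 1: 0.2}
print(disparate_impact_ratio(y_pred, group))  # 0.25 -- well below 0.8
```

Demographic parity is only one of several competing fairness criteria, and they cannot all be satisfied at once; choosing among them is a value judgment rather than a technical one, which is Kranzberg's point exactly.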


Surveillance and Privacy


The ambivalent nature of AI becomes particularly apparent in discussions of surveillance and privacy. AI-powered surveillance systems can enhance public safety by detecting threats, finding missing persons, and preventing crimes. Smart city technologies can optimize traffic flow, reduce energy consumption, and improve emergency response times. These applications demonstrate AI's potential to create safer, more efficient communities.


However, these same technologies enable unprecedented levels of surveillance and social control. Facial recognition systems can track individuals' movements and associations, creating detailed profiles of daily life. Predictive policing algorithms may perpetuate discriminatory law enforcement practices. The integration of AI into surveillance infrastructure creates the technical foundation for authoritarian control, even if that is not the initial intention.


The Path Forward


Kranzberg's Law suggests that the ultimate impact of AI will depend not on the technology itself, but on the choices we make about how to develop, deploy, and govern it. This recognition places enormous responsibility on technologists, policymakers, and society as a whole. We cannot assume that AI will automatically benefit humanity, nor can we dismiss it as inherently harmful. Instead, we must actively shape its development and deployment to maximize benefits while minimizing harms.

This requires robust public discourse about AI's societal implications, inclusive development processes that consider diverse perspectives and needs, and adaptive governance frameworks that can evolve with the technology. It demands investment in education and retraining programs to help workers navigate AI-driven economic transitions, and strong institutions to ensure that AI's benefits are broadly shared rather than concentrated among a privileged few.


Conclusion


Kranzberg's Law reminds us that technology's ultimate value lies not in its inherent properties, but in how it interacts with human values, institutions, and choices. Artificial intelligence, as the defining technology of our era, embodies this ambivalence completely. It offers tremendous potential for human flourishing while simultaneously posing significant risks to privacy, equality, and human agency.


Understanding AI through the lens of Kranzberg's Law helps us avoid both naive techno-optimism and paralyzing techno-pessimism. Instead, it calls us to engage actively and thoughtfully with this powerful technology, recognizing that its impact on society will ultimately reflect our collective wisdom, values, and choices. The future of AI is not predetermined—it is a responsibility we all share in shaping.


NOTES

(1) Please browse my blog series Being Responsible in the Age of Social Media, Cryptocurrency, and Smart Weapons/Cars/Phones (2018) for an overview of my philosophy and ethics of technology. "Artificial Intelligence: the Quintessential Ambivalent Technology" is the first in a series applying that model to AI. My first foray into these topics was my dissertation, Toward a Theological Ethics of Technology: An Analysis in Dialogue with Jacques Ellul, James Gustafson, and Philosophy of Technology (Ann Arbor, MI: University Microfilms International, 1990). I revisited the model in "Complex Responsibility in an Age of Technology," in Living Responsibly in Community, ed. Fredrick E. Glennon, et al. (University Press of America, 1997).

(2) The term "ambivalent technology" is most closely associated with Langdon Winner, a political theorist of technology. While he may not have been the first to use the exact phrase, he explored the ambivalence of technology prominently in his influential book Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (MIT Press, 1977). This book, along with other works by Winner, was a central resource for my dissertation.

(3) Kranzberg, Melvin. "Technology and History: 'Kranzberg's Laws'." Technology and Culture 27, no. 3 (July 1986): 544-560.


Photo by Igor Omilaev from Unsplash


© 2025 Russell E. Willis

__________________________________________________________________________________

by Russell E. Willis


Got Vision?


If I can help you, your business, or your organization with copywriting (white papers, blogs, web content, case studies, or emails); ghostwriting (books and articles, specializing in converting blog and podcast series into print or ebooks); or short-form "explainer" videos, please check out REWillisWrites.com (and fill out the contact form) or email me at rewilliswrites@gmail.com.


I am also available for strategic planning consultations for organizations and creatives, especially those seeking direction in an age of AI and those waging a battle against the Climate Crisis. Contact me at REWillisWrites@gmail.com for more information.






Contact REWillisWrites
For a free (no obligation) consultation

or connect with me at:
rewilliswrites@gmail.com
or
802-233-3242

