We need to prepare for "addictive intelligence"
—By Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School whose work focuses on computational law, and Pat Pataranutaporn, a researcher at the MIT Media Lab who studies human-AI interaction.
Worries about AI often imagine doomsday scenarios in which
systems escape human control or even understanding. But there are nearer-term harms
we should take seriously: that AI could jeopardize public discourse, cement
biases in loan decisions, judging, or hiring, or disrupt creative industries.
However, we foresee a different, but no less urgent, class
of risks: those stemming from our relationships with nonhuman agents.
AI companionship is no longer theoretical—our analysis
of a million ChatGPT interaction logs reveals that the second
most popular use of AI is sexual role-playing. We are already starting
to invite AIs into our lives as friends, lovers, mentors, therapists, and
teachers. Even the CTO of OpenAI warns that AI has the potential to be “extremely addictive.”