Please join us in congratulating Dr. Mary Gregg (Ph.D. 2022), winner of the Outstanding Instructor Award at Yonsei University's Underwood International College in Songdo, Republic of Korea!
Enroll Today! Seats Available in Logic II and Paradoxes
Interdisciplinary Workshop on Human Centered AI: How do we Connect?
January 13, 2024
8:00 am-12:15 pm US Eastern Standard Time / 2:00 pm-6:15 pm Morocco Time
This half-day virtual workshop, jointly hosted by the Université Internationale de Rabat and the University of Connecticut, will bring computer scientists into conversation with political scientists, and philosophers into dialogue with medical professionals. We believe that human-centered AI will only be possible when humans from a truly diverse array of perspectives, backgrounds, and disciplinary training are involved in designing and deploying these powerful tools.
Our workshop will cluster around three complementary themes:
8:15-9:15 am
Panel 1) Interdisciplinary Work in AI: Challenges, Opportunities, and Successes
Panelists will provide case studies of successful projects. What worked well? What are the obstacles to interdisciplinary collaboration, and how might we navigate them?
What do computer scientists need from humanists to better do their work? What are humanists missing/misunderstanding about AI development?
Panelists
Anke Finger, LCL and Digital Media and Design, UConn
Ihsane Hmamouchi, Rheumatology, Université Internationale de Rabat
Arash Zaghi, Civil Engineering, UConn
9:20-10:20 am
Panel 2) Language and AI
How is bias introduced through exclusive language model training? How do we include more language diversity in AI training? How do chatbots alter our language?
Panelists
Kyle Booten, English, UConn
Reda Mokhtar El Ftouh, Law, Université Internationale de Rabat
Adil Bahaj, Biomedicine and AI, Université Internationale de Rabat
10:25-11:40 am
Panel 3) AI and the Social
How can we determine the ethics of AI? How can we understand and ameliorate AI’s role in spreading disinformation via social networks? How will AI affect how humans relate to one another?
Panelists
Ting-an Lin, Philosophy, UConn
Hakim Hafidi, Artificial Intelligence and Network Science, Université Internationale de Rabat
John Murphy, Digital Media and Design, UConn
Meriem Regragui, Law, Université Internationale de Rabat
11:40 am-12:15 pm
Concluding Remarks
This event is the result of a partnership between UConn Global Affairs, UConn Humanities Institute and the Université Internationale de Rabat, Morocco.
William Lycan on Mind, Meaning, and Method
Alex Stamson: “Kinda Radical” Podcast
Law, Politics, and Responding to Injustice
Lewis Gordon: Tavis Smiley
Check out Distinguished Professor Lewis Gordon’s recent media appearance on the Tavis Smiley podcast. Dr. Gordon has worked with Tavis Smiley in the past, discussing topics such as political extremism, Black consciousness, and more. In this episode, Smiley and Gordon discuss racial justice and anti-Blackness policies, centering on the question of how we can cultivate Black consciousness without fear.
Congratulations, Lewis!
Tracy Llanera: The Moral Agency of White Terror Wolves
This Friday, December 13th, Associate Professor of Philosophy Tracy Llanera will give a talk at Digital Transformations: Identity, Gender and Affectivity, an event hosted at Cardiff University. Dr. Gen Eickers, Dr. Lucy Osler, Dr. Louise Richardson-Self, and Dr. Francesca Sobande will also be speaking at the event.
Professor Llanera will be presenting “The Moral Agency of White Terror Wolves,” and you can read the abstract of her talk below:
The Moral Agency of White Terror Wolves
Tracy Llanera
This paper investigates the case of “white terror wolves,” or extremists responsible for violent lone attacks committed in the name of white supremacist ideology; examples include Anders Breivik (Norway), Dylann Storm Roof (USA), Brenton Tarrant (Australia), John Earnest (USA), Patrick Wood Crusius (USA), and Stephan Balliet (Germany). Government actors and the media often describe these perpetrators as being mentally ill or brainwashed—a perspective that risks misconstruing mental illness as the key driver for domestic terrorism instead of white extremism. This paper contests this perspective by ascribing moral agency to white terror wolves. Its analysis proceeds in three parts. First, it describes the role of white terror wolves in white extremism and the pernicious framing of their perpetrator identity as being mentally ill. Second, drawing on Alasdair MacIntyre’s moral philosophy, it outlines a conception of moral agency that is relevant to these cases. Third, it interrogates how white terror wolves exercise their moral agency to the point of moral failure.
While the in-person event will be held in Wales, you can join online starting at 9:30 AM London time (4:30 AM EST); the event ends at 4:30 PM London time (11:30 AM EST).
Congratulations, Tracy!
Ting-an Lin: AI, Normality, and Oppressive Things
Assistant Professor of Philosophy Ting-an Lin will give a public lecture at Academia Sinica in Taiwan this Friday, December 13th. The talk is part of Academia Sinica’s Beyond Gender: Diversity, Plurality, and Philosophy series. Professor Lin will be joined by Assistant Professor Zhen-Rong Gan of Tunghai University and Hsiang-Yun Chen of Academia Sinica, who will serve as discussant and moderator, respectively.
Professor Ting-an Lin will be presenting her paper “AI, Normality, and Oppressive Things,” and you can read the abstract below:
While it is well-known that AI systems can be perniciously biased, much attention has been paid to instances where these biases are expressed blatantly. In this talk, I draw on the literature on the political impacts of artifacts to argue that many AI systems are not merely biased but materialize oppression. In other words, many AI systems should be recognized as oppressive things when they function to calcify oppressive normality, which treats the dominant groups as normal, whereas others as deviations. Adopting this framework emphasizes the crucial roles that physical components play in sustaining oppression and helps identify instances of AI systems that are oppressive in a subtler way. Using instances of generative AI systems as the central examples, I theorize three ways that AI systems might function to calcify oppressive normality—through their content, their performance, and their style. Since the oppressiveness of oppressive things is a matter of degree, I further analyze three contributing factors that make the oppressive impacts of AI systems especially concerning. I end by discussing the limitations of existing measures and urge the exploration of more transformative remedies.
Congratulations, Ting!
Lewis Gordon: Londis Lectureship Speaker
On November 21st, Distinguished Professor Lewis Gordon presented his paper “Freedom Relished, Freedom Feared” for the James J. Londis and Family Lecture, held at the 2024 Society of Adventist Philosophers conference. Professor Gordon spoke about the responsibility that comes with freedom, arguing that we must exercise our right to choose despite the fear or insecurity that may accompany it. Gordon also suggested that in order to maintain a healthy society, we must communicate with one another despite disagreements: we need to “[develop] ways of living together on this ever-shrinking planet.”
To read the full summary article on the lecture, please see the Spectrum website.