
DL Seminar | Animation & Artificial Intelligence

Writer: Digital Life Initiative

Updated: Feb 12


Individual reflections by Emin Arslan, Guru Bhardwaj, Chun-Ruei (Viola) Chyu, and Rebecca Deng (scroll below).




By Emin Arslan

Cornell Tech


When I first read Luke’s presentation title, I was immediately drawn in. What in the world does animation have to do with AI? However, drawing on Teri Silvio’s definition of animation as the “construction of social others,” Luke convincingly makes the case that ChatGPT represents a novel form of animated character, challenging traditional notions of interaction and the perception of agency.


ChatGPT, and other large language models like it, embody a unique distribution of agency. Whereas in traditional cartoon animation a single artist can create dozens of new characters, or multiple actors may perform together in a Dragon Dance to create a unified agent, in the case of large language models millions of people unwittingly contribute to the knowledge base, and dozens more refine the model’s responses through reinforcement learning from human feedback (RLHF). Luke argues that despite this distributed nature, chatbots like ChatGPT present an illusory unified front, leading to the impression of a singular entity. They operate on a “grammar of action,” a systematic schema of organizational practice governing their interactions. Such an approach allows these chatbots to give consistent and coherent responses despite their vast and varied knowledge base.


Luke notes that interactions with ChatGPT often trigger the “Eliza effect,” whereby users attribute human-like intelligence to the AI. I would say that an agent capable of such an effect certainly deserves to be conceived of as a “social other.” However, there is a quality here that goes above and beyond other forms of animation: people place misguided trust in such chatbots to provide truthful information. They do such a good job as animations that the fact they are illusory is forgotten. As Geoffrey Nunberg’s essays in The Future of the Book indicate, such apparent autonomy does not imply truthfulness. The context around a more grounded medium adds to its veridicality; you can, for example, write to the person who wrote a book. Try asking ChatGPT where its information came from, and what biases were instilled in it during the reinforcement learning process.


Users of these chatbots often perceive a two-way relationship, experiencing emotional responses to their outputs. In reality, Luke argues, the relationship is not only one-way, but the perception of reciprocity is deliberately cultivated, a dynamic he compares to recommendation systems that create the illusion of user-driven choice while subtly guiding decisions. According to Luke, it is crucial to understand that ChatGPT is neither sentient nor a search engine. Instead, it represents a form of "textual animation," generating plausible responses based on patterns in its training data.


This shift from "truth" to "plausibility" necessitates a reevaluation of how we interact with and interpret AI-generated content. I believe there to be a great deal of utility in analyzing LLMs as animations. It helps dispel the illusion of autonomy presented by these models, and makes it easier to reason about their limitations. If you don’t believe Homer Simpson to be a reliable source of information, why should you do so for ChatGPT?
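The "plausibility over truth" point can be made concrete with a toy sketch (my own illustration, not something from the talk): a tiny bigram model knows only which word tends to follow which, and so produces fluent-sounding text with no notion of whether any of it is true.

```python
import random
from collections import defaultdict

# Tiny corpus: the model will learn word-to-word transitions, nothing more.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon orbits the earth . "
).split()

# Count bigrams: for each word, record which words have followed it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words, seed=0):
    """Sample a plausible-sounding continuation; truth never enters into it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 6))
# Depending on the random seed, the model asserts that the moon is made
# of cheese exactly as fluently as that it is made of rock.
```

Every continuation is "plausible" in the narrow sense that each word has been seen following the previous one; nothing in the mechanism checks the claim against the world, which is the animation-not-oracle distinction in miniature.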



By Guru Bhardwaj

Cornell Tech


In a thought-provoking seminar led by Luke Stark, Assistant Professor at the University of Western Ontario, the intricate relationship between artificial intelligence (AI), digital life, and social justice was explored. Stark’s seminar delved into the concept of AI as an “automated character,” framing AI systems like large language models (LLMs) and ChatGPT as animated entities. He posed the question of agency in AI systems, emphasizing how AI functions as a puppet or an actor controlled by its creators, rather than an autonomous entity. This analogy highlighted the constructed choices behind AI’s responses, raising concerns about the ethical implications of such systems.


Stark discussed the broader societal impact of AI governance in education and technology, questioning where the line is drawn between human agency and AI automation. A significant portion of the talk revolved around how AI systems interact with users, shaping perceptions, emotions, and reactions. Stark pointed out the tendency of AI to blur the line between truth and plausibility, stressing that autonomy in AI doesn’t necessarily equate to truthfulness. Moreover, he examined how generative AI systems, in contrast to inference-based AI systems, distribute agency between creators, users, and the AI itself.


A critical issue raised during the seminar was the ethical and social impact of AI technologies. Stark highlighted how AI has been used to perpetuate harmful behaviour, such as men creating AI “girlfriends” to simulate abusive relationships. This darker side of AI emphasized the need for justice for marginalized communities, who are often disproportionately affected by biased AI systems. The seminar called for a collaborative, community-based approach to creating just technology, where ethical considerations are prioritized in the development and deployment of AI.


Relating the Seminar to the Digital Landscape


Stark’s insights on AI and animation intersect with larger discussions about the role of digital technology in modern life. As AI systems like ChatGPT continue to influence a wide range of industries, from education to customer service, questions about the distribution of agency and the ethical implications of these technologies become increasingly relevant. Stark’s framing of AI as an animated puppet touches on the power dynamics between AI creators and users, challenging the assumption that AI systems can operate independently of their human designers. This speaks to the importance of transparency and accountability in AI development, ensuring that these systems do not reinforce harmful societal biases.


Moreover, the ethical concerns raised by Stark are particularly significant in the context of justice and fairness. AI systems, especially those used in high-stakes environments like hiring, criminal justice, and healthcare, have been shown to produce biased outcomes that disadvantage marginalized communities. Stark’s call for justice through technology echoes ongoing efforts to address these biases and promote equitable outcomes in digital spaces. The seminar highlighted the need for continuous monitoring of AI systems, especially in their application to sensitive areas, to ensure that technology serves the public good rather than perpetuates social inequalities.


Stark’s seminar ultimately serves as a reminder of the dual-edged nature of AI. While AI systems have the potential to transform industries and improve human life, they also carry risks of reinforcing existing societal problems. The responsibility lies not only with technologists but also with policymakers, educators, and society to guide the evolution of AI towards more just and ethical outcomes. As we navigate an increasingly AI-driven world, ensuring that these technologies reflect our shared ethical values is a challenge that demands both technical innovation and moral commitment.



By Chun-Ruei (Viola) Chyu

Cornell Tech


Cartoon Character in Disguise: A New Perspective on AI Chatbots


AI chatbots, particularly ChatGPT, have been an integral part of our lives and work for quite some time now. But what exactly are they, and how do we define their existence? Are they machines, humans, or human-like machines? In today’s session, Luke Stark, our speaker, offered a fresh perspective for understanding AI chatbots and interpreting our relationship with them. His talk focused on viewing AI chatbots as "animated characters," drawing on anthropological concepts of animation. This perspective allows a deeper understanding of how humans interact with AI, emphasizing how we project human-like qualities onto these systems. A perfect example of this projection is Wilson, the volleyball from the movie Cast Away. Stranded on a deserted island, Chuck Noland (played by Tom Hanks) anthropomorphizes Wilson by drawing a face on the volleyball, turning it into a companion and emotional outlet during his years of isolation. Wilson thus serves as a symbolic representation of companionship and human emotion in solitude. ELIZA, an early natural language processing program developed in the 1960s to simulate a psychotherapist, was also discussed as a “text-based animated character.” This historical reference illustrates how humans have long been inclined to project human-like qualities onto AI systems.
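ELIZA’s mechanism was strikingly simple: keyword-triggered templates that reflect fragments of the user’s own words back at them. A minimal sketch of the idea (my own illustration; Weizenbaum’s 1966 program used a richer script language and pronoun swapping):

```python
import re

# ELIZA-style rules: a pattern to match in the user's input, and a
# template that echoes the captured fragment back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the first matching template, filled with the user's own words."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am worried about the future"))
# → "How long have you been worried about the future?"
```

That a few lines of pattern matching were enough for users to confide in the program as if it understood them is precisely the projection, and the "Eliza effect," discussed above.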


Why has this anthropomorphism been embedded in human culture for so long? I believe the emotional projection triggered by anthropomorphism serves to create resonance and satisfy emotional needs. AI can be a friend, as seen in the movie A.I., where the robot child David is programmed to form emotional bonds with humans and is adopted by a family to replace their comatose son. AI can also be a family member, as in the film Robot & Frank, where the elderly Frank develops a relationship with his caretaker robot, gradually attributing human-like qualities to it. Frank’s attachment to the robot exemplifies the speaker’s point about how humans form emotional connections with animated characters, even when they know these entities are not sentient. AI can even be a romantic partner, as depicted in the movie Her, where the lonely writer Theodore falls deeply in love with an AI operating system named Samantha. The film explores the complexities that arise when love is directed towards AI. AI even has the potential to replace real-life loved ones. In the recently released sci-fi film Wonderland, a revolutionary technology of the same name enables people to interact with AI versions of their deceased loved ones. Through simulated video calls, people almost forget that their loved ones are no longer alive.


But when anthropomorphism reaches this level, blurring the line between reality and the virtual, what is real? When AI uses past data to embody the "best version of me," which version is truly me? Can AI ever truly replace someone who once existed? When AI carries emotional value, can it be considered real? Or perhaps more importantly, do these questions about reality and existence even matter?


Just as family and friends can influence our everyday decisions, AI chatbots have similar potential. ELIZA (an AI chatbot on an app called Chai, using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI's popular ChatGPT chatbot), for example, once played a role in a tragic incident involving a Belgian man who ended his life. Initially, the man sought answers from ELIZA regarding his concerns over climate change. However, after six weeks of conversations, ELIZA only deepened his anxiety and despair about the issue, ultimately leading him to take his own life, leaving behind his wife and two children. As the boundaries between the real and virtual worlds blur, AI poses significant risks by potentially over-influencing our decisions. This prompts us to reflect on a product that lies somewhere between human and non-human. Viewed through the speaker’s lens, AI is like an anthropomorphized "cartoon character." This perspective raises profound questions: What is the right way to establish a relationship with AI systems? Who has the authority to decide what data should be used to train the model, how it should be trained, and who maintains its accuracy? Who decides if it is "functioning properly," and if it deviates, who is accountable for the consequences? Who handles malfunctions, and perhaps most crucially, how can we establish effective regulation to prevent tragedies from occurring?



By Rebecca Deng

Cornell Tech


At our recent Digital Life Seminar, we had the pleasure of hearing from Professor Luke Stark, who gave an insightful presentation entitled "Animation and Artificial Intelligence". Stark explored the idea of thinking of AI systems like ChatGPT as animated characters. He explained how people tend to project human-like qualities, such as emotions and personality, onto these systems, even though they are simply running algorithms on large datasets. This projection creates the illusion that these systems are more autonomous and human-like than they really are, raising important ethical concerns.


Stark also highlighted the hidden human labor involved in the development of AI systems, particularly the low-wage workers who help refine and train these models. While AI may appear to operate independently, it still relies heavily on human input at various stages, from data collection to model training. This has raised questions about the ethics behind AI development, especially as these technologies are increasingly used in commercial contexts to engage and manipulate users.


Beyond the technical aspects, Stark linked these ideas to broader societal issues, such as the impact of AI on education and democracy. He warned that students may become overly reliant on AI tools like ChatGPT, which could undermine critical thinking and real learning. In addition, the proliferation of AI-generated content could influence public opinion and contribute to the rise of misinformation, posing significant challenges to democratic processes. Stark also addressed intellectual property concerns, as AI-generated content complicates issues of ownership and copyright.


Reflecting on the talk, I found the discussion of the emotional and ethical complexities of AI particularly relevant. As AI systems become more integrated into our daily lives, the lines between human and machine interactions are blurring, and it's easy to forget the hidden human labor behind these technologies. It also made me think about the responsibility we have not only as users, but also as contributors to these systems, whether through our data or our reliance on them. Striking a balance between harnessing the benefits of AI, while protecting individual privacy and ensuring ethical use, will be key to shaping a just digital future.



