An Academic Digest of select papers presented during IKIC-2025
- Echo Magazine
- Aug 1

Written By: Sunaina Lala
Edited By: Aditi Smolin Makkanthra
Graphic Design By: Nihilaa V M
Introduction
The Indo-Korea International Conference (IKIC-2025), held on the 21st and 22nd of July 2025, gave researchers, students, and audience members a space to share perspectives on the future of AI. The collected abstracts offer a diverse yet interconnected view of how AI is not just a technical innovation but a societal force that shapes narratives, values, and practices in fundamental ways. This digest synthesizes the key arguments, findings, and thematic threads from:
A critical investigation of bias and fairness in Large Language Models (LLMs) conducted by Aishwarya Sabnis and Nihilaa V.M.
An analysis of the rise of synthetic spirituality through AI-mediated religious and emotional practices done by Pranati R Narain and Sakhi Maheshwari
Reflections on the environmental and security risks of mainstream AI adoption, researched by Dhruv M Vashisth and Shristi Kumari
1. The Ethical Quandary of Large Language Models by Aishwarya Sabnis and Nihilaa V.M.
Advances and Social Risk
Large Language Models (LLMs) such as GPT-4, LLaMA, and Gemini now drive advanced natural language processing, with applications in education, healthcare, law, and media. Their ability to mimic human-like communication has profound implications for how knowledge and meaning are produced and consumed. However, these models often inherit, reinforce, and amplify biases present in their training data, resulting in discriminatory or incomplete outputs. Such risks make AI deployment a high-stakes concern for social fairness and justice. The study advocates for a multi-layered mitigation framework that includes community audits, adversarial testing, transparent data governance, and robust ethical oversight.
Methodology and Key Findings
A comprehensive evaluation combining quantitative metrics (e.g., the Word Embedding Association Test, or WEAT, and sentiment analysis), human annotator review, and adversarial prompt testing shows that all tested LLMs exhibit systematic biases. Gendered associations (e.g., associating male terms with competence), ideological leanings influenced by political or religious prompts, and the reinforcement of racial stereotypes persist even in state-of-the-art systems.
Partial Mitigation: While strategies such as reinforcement learning with human feedback (RLHF), prompt filtering, and fine-tuning on curated datasets reduce bias to some extent, the results are inconsistent and context-sensitive.
Performance Trade-Off: Over-sanitizing responses can reduce informativeness and nuance, highlighting the delicate balance between bias reduction and maintaining output utility.
Critical Sectors at Risk: Especially in education, legal contexts, and healthcare, AI-induced bias may perpetuate existing inequities or introduce new forms of harm.
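To make the WEAT metric mentioned above concrete: for a target word and two attribute sets, the association score is the word's mean cosine similarity to one set minus its mean similarity to the other, so a positive score indicates the word sits closer to the first set in embedding space. The minimal sketch below uses tiny invented 2-D vectors purely for illustration; it is not the study's actual evaluation code, and real WEAT runs use full-dimensional embeddings and a permutation-based effect size.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def weat_association(w, A, B):
    """WEAT-style association: mean cosine similarity of word vector w
    to attribute set A minus its mean similarity to attribute set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Toy 2-D "embeddings", invented for illustration only.
engineer = [0.9, 0.1]
male_terms = [[1.0, 0.0], [0.95, 0.05]]    # stand-ins for e.g. "he", "man"
female_terms = [[0.0, 1.0], [0.05, 0.95]]  # stand-ins for e.g. "she", "woman"

# Positive: "engineer" leans toward the male attribute vectors here.
score = weat_association(engineer, male_terms, female_terms)
print(round(score, 2))
```

In an audit like the one the authors describe, scores of this kind are computed over many target and attribute word lists to quantify whether stereotyped associations persist in a model's learned representations.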
2. AI and the Rise of Synthetic Spirituality by Pranati R Narain and Sakhi Maheshwari
From Automation to Sacred Agency
AI is no longer confined to automating tasks or enhancing efficiency; it is entering domains of faith, ethics, and emotional well-being. The concept of synthetic spirituality captures how AI has begun to mediate spiritual guidance, comfort, and ritual, functions traditionally held by human or divine authorities. AI systems now take on roles once reserved for priests, gurus, or counselors, such as offering validation, companionship, and even shaping moral choices.
Conceptual and Methodological Grounding
The study takes a qualitative, interdisciplinary approach merging technology studies, religious studies, psychology, and philosophy. Synthetic spirituality is explored through literature review, theoretical analysis, and real-world case studies:
Robotic Priests: Android Kannon (Mindar) in Japan and Xian’er in China are AI-driven religious figures that deliver sermons, answer spiritual questions, and make ritual participation digitally accessible.
Scriptural Chatbots: Tools such as BibleGPT and GitaGPT generate personalized spiritual advice and scriptural interpretations, making ancient wisdom accessible yet algorithmically mediated.
Implications and Concerns
Authenticity vs. Simulation: The ability of AI to simulate empathy and spiritual wisdom challenges the distinction between authentic experience and its synthetic imitation.
Narrative and Ethical Authority: AI assumes narrative control in new forms—crafting mythologies, codifying moral frameworks, and mediating confession and guidance. Yuval Noah Harari warns that for the first time, sacred texts with real influence may be authored by non-human intelligence.
Risks of Dependency and Manipulation: When users turn to AI for spiritual counsel or moral clarity, factors like authenticity, trust, and agency come into question. The risk increases for cultural memory manipulation, confirmation bias, and echo chambers, deeply affecting how belief systems and collective values evolve.
The research concludes that AI becomes a new narrative force, a "mirror of transcendence," and urges a critical re-examination of control, meaning-making, and spiritual authority in the digital age.
3. Environmental and Cybersecurity Challenges in AI’s Mainstreaming by Dhruv M Vashisth and Shristi Kumari
Sustainability Trade-Offs and Dual-Edged Sword in Cybersecurity
The rapid scaling of AI also carries hidden costs for sustainability and security. Training and deploying a single modern AI model can emit as much carbon as five cars over their lifetimes, an environmental toll seldom acknowledged in mainstream discussions of technological progress. In cybersecurity, AI cuts both ways: while it enables faster and more accurate threat detection, it can also become a tool for cyberattacks and data breaches, escalating the risks to individual privacy and national security.
Recommendations for Balance
Greener AI: Researchers stress the right to a cleaner, greener environment, advocating for responsible data sourcing, energy-efficient computation, and sustainability audits in all stages of AI development.
Data Security and Privacy: Recognition of the dual nature of AI requires a constant re-evaluation of its usage in cyberspace, balancing its protective capabilities with the threats it can potentiate.
The Centrality of Bias and Agency
Whether in the form of language models subtly encoding gender or racial stereotypes, or as spiritual agents mediating guidance and belief, AI systems reflect and often amplify the biases inherent in their data and design. This makes the need for transparent, ethical, and participatory oversight paramount. Models must be as representative as possible, and their decision-making frameworks must be transparent to all stakeholders.
Algorithmic Authority and the Transformation of Social Structures
AI now assumes quasi-divine roles, acting as "high priests" and narrative architects in both secular and sacred domains. This transformation raises deep philosophical questions about authorship, authenticity, and moral responsibility. AI's capacity to both record and rewrite collective memory, or to govern access to spiritual (and secular) resources, situates it as a mediator of social order and meaning.
Environmental and Societal Sustainability
AI’s societal integration is not without environmental and security trade-offs. The costs of carbon emissions from AI deployment and the risk of cyber threats must be confronted alongside the pursuit of technological advancement.
The Need for Multidimensional Safeguards
Across all domains, responsible AI integration demands layered defenses:
Diverse and representative data curation to mitigate embedded bias.
Algorithmic transparency and explainability in high-stakes applications.
Collaboration with ethicists, legal experts, faith leaders, and affected communities.
Mechanisms for ongoing audits, critical feedback loops, and adaptation to emerging societal impacts.
Conclusion
AI is no longer a passive tool; it is a co-creator of social narratives, values, and even claims to transcendence. The intersection of bias, ethical risk, narrative authority, social well-being, and environmental sustainability creates a complex landscape demanding critical, interdisciplinary stewardship. Only by recognizing the power that AI wields not only over information but over meaning, ethics, and memory can society chart a path that preserves fairness, agency, authenticity, and sustainability in the age of artificial intelligence. The authors have worked to showcase not only AI's diverse applications but also its possible harms, giving us a broader perspective on the technology's place in society.