Speakers

Dr. Diptikalyan Saha

IBM Research, India

Title: Navigating the Landscape of AI Agents: Characteristics, Innovations, and Future Directions

Abstract: AI agents are becoming increasingly effective at automating a wide range of tasks, including some of the key tasks in software engineering, through their use of LLMs to reason, plan, refine, and interact with their environment. In this talk, we will start with the motivation for creating AI agents and some history of the technical breakthroughs in agentic technologies. We will discuss some of the key characteristics of AI agents and key works in this area such as ReAct, Reflection, and Reflexion. Additionally, we’ll go through some of our recent work, such as NL2SQL and automated API testing, where we have used AI agents. Finally, we’ll discuss topics such as testing and the reliability of AI agents, and conclude with some directions for future work.
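
To make the reason–act loop concrete, here is a minimal ReAct-style sketch. It is purely illustrative and not the speaker’s implementation: `call_llm` and the toy tool registry are hypothetical placeholders for any chat-completion API and tool set.

```python
# Minimal ReAct-style agent loop. Purely illustrative: `call_llm` is a
# hypothetical placeholder for any chat-completion API.
import re

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM provider here")

TOOLS = {
    # Toy tool registry; real agents expose search, code execution, etc.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

TEMPLATE = """Answer the question by repeating this loop:
Thought: reason about what to do next
Action: tool_name[tool_input]   (available tools: calculator)
Observation: result of the action (filled in by the runtime)
When done, end with:
Final Answer: the answer

Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = TEMPLATE.format(question=question)
    for _ in range(max_steps):
        reply = call_llm(transcript)          # reason + propose an action
        transcript += reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        m = re.search(r"Action:\s*(\w+)\[(.+?)\]", reply)
        if m:                                 # execute the tool, feed back
            tool, arg = m.groups()
            obs = TOOLS[tool](arg) if tool in TOOLS else "unknown tool"
            transcript += f"\nObservation: {obs}\n"
    return "step budget exhausted"
```

Reflexion-style approaches extend this loop by feeding a self-critique of a failed episode back into the prompt for the next attempt.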

Karthik Vaidhyanathan

IIIT-Hyderabad, India

Title: Playing with Abstractions: At the Crossroads of Software Architecture and Generative AI

Abstract: Text is a powerful abstraction of reality, just as architecture abstracts complex software systems. The advent of Generative AI, specifically Large Language Models (LLMs), has set new benchmarks in understanding and generating human-like text, revolutionizing multiple sectors. In this talk, we explore some of these capabilities to understand whether LLMs can be the architect’s new best friend. We begin with an overview of LLMs and explore their role in generating design decisions, drawing on our research on leveraging GenAI for architecture knowledge management. We then explore their impact on automating component generation, based on our ongoing research in the serverless context. Further, we provide insights on LLMs’ capabilities for supporting runtime architectural adaptation, highlighting our efforts on building a Generative AI-powered autonomous CloudOps Copilot in collaboration with our industrial partner. The talk concludes by listing different opportunities and challenges, drawing insights from our ongoing efforts involving Small Language Models as well as architecting a multi-agent framework for the CloudOps domain.
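
As a purely illustrative sketch of the architecture-knowledge-management idea (the prompt, the `chat` helper, and the JSON fields below are hypothetical assumptions, not the speaker’s system), an LLM can be asked to distill design decisions from development artifacts into structured, searchable records:

```python
# Hypothetical sketch: extracting an architectural design decision from a
# development artifact (e.g., a pull-request description) as structured data.
import json

def chat(prompt: str) -> str:
    raise NotImplementedError("placeholder for any LLM chat API")

PROMPT = """From the text below, extract the architectural design decision
as JSON with keys "decision", "rationale", and "alternatives_considered".

Text: {artifact}"""

def extract_decision(artifact: str) -> dict:
    reply = chat(PROMPT.format(artifact=artifact))
    return json.loads(reply)  # downstream tooling can index and query this

# Usage (illustrative):
# extract_decision("Moved image resizing into a serverless function to cut "
#                  "idle cost; keeping a dedicated VM was rejected as overkill.")
```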

Rajaswa Patil

Applied AI Consultant; Previously: Postman/Microsoft

Title: From Code to Cognition: How Agentic AI Unites Software 1.0 and Software 2.0

Abstract: When GPT-3 was released in 2020, we entered the “Software 2.0” era, where tasks previously implemented with deterministic programming in formal languages (Software 1.0) started shifting to large data-driven probabilistic models. At first, there was a misguided rush to replace Software 1.0 workflows with Software 2.0 systems, as Generative AI seemed to be eating the Software 1.0 stack. But as Software 2.0 systems scaled, serious problems emerged: unreliability, unbounded autonomy, lagging context, and an inability to interact predictably and structurally with the real digital and physical world. This has given rise to Agentic AI, where Software 2.0 systems increasingly borrow concepts from Software 1.0. By using APIs for structured interaction, state machines for predictable autonomy, context-free grammars for guided output, and code interpretation for precise computations, Agentic AI combines the best of both worlds and unlocks new opportunities in AI Research and Software Engineering. This talk will explore this trend reversal through real-world case studies, recent research, and examples from leading Agentic AI frameworks. We’ll see how frontier tech labs and companies like Postman, GitHub, and Perplexity are scaling generative AI to millions of users by combining Software 1.0 and Software 2.0. We’ll also discuss how new Agentic AI startups in the “Service-as-Software” economy are using Software 1.0 tooling as a core part of their Generative AI architecture.
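
A rough illustration of this borrowing (a hypothetical sketch under assumed names, not code from Postman, GitHub, Perplexity, or any specific framework): a deterministic state machine bounds how many steps the agent may take, while schema validation turns the model’s free-form output into a structured API call before anything executes.

```python
# Hypothetical sketch: Software 1.0 scaffolding around a Software 2.0 core.
import json
from enum import Enum, auto

class State(Enum):   # state machine: makes the agent's autonomy predictable
    PLAN = auto()
    ACT = auto()
    DONE = auto()

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for any LLM provider")

TOOLS = {"search": lambda q: f"results for {q!r}"}  # stub tool registry

def validated_call(raw: str) -> dict:
    """Structured interaction: the model must emit JSON of a fixed shape,
    which deterministic code checks before any tool runs. (Grammar-guided
    decoding would enforce this shape at generation time instead.)"""
    call = json.loads(raw)
    if set(call) != {"tool", "args"} or call["tool"] not in TOOLS:
        raise ValueError("schema violation")
    return call

def run(task: str, max_transitions: int = 10) -> str:
    state, call, result = State.PLAN, None, ""
    for _ in range(max_transitions):        # hard upper bound on autonomy
        if state is State.PLAN:
            raw = call_llm(f'Emit one tool call as JSON with keys "tool" and "args" for: {task}')
            call = validated_call(raw)
            state = State.ACT
        elif state is State.ACT:
            result = TOOLS[call["tool"]](call["args"])  # deterministic execution
            state = State.DONE
        else:
            break
    return result
```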

Dr. Alexander Serebrenik

Eindhoven University of Technology, The Netherlands

Title: Exploring the Effect of Multiple Natural Languages on Code Suggestion Using GitHub Copilot

Abstract: GitHub Copilot is an AI-enabled tool that automates program synthesis. It has gained significant attention since its launch in 2021. Recent studies have extensively examined Copilot’s capabilities in various programming tasks, as well as its security issues. However, little is known about the effect of different natural languages on code suggestion. Natural language is considered a source of social bias in the field of NLP, and this bias could impact the diversity of software engineering. To address this gap, we conducted an empirical study to investigate the effect of three popular natural languages (English, Japanese, and Chinese) on Copilot. We used 756 questions of varying difficulty levels from AtCoder contests for evaluation. The results highlight that Copilot’s capability varies across natural languages, with Chinese achieving the worst performance. Furthermore, regardless of the natural language, performance decreases significantly as question difficulty increases. Our work represents an initial step toward understanding the significance of natural languages for Copilot’s capability and opens promising opportunities for future work.
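
To make the study design concrete, a harness for this kind of evaluation might look like the sketch below (hypothetical names and stubs; this is not the authors’ actual pipeline): the assistant is prompted with the same problem statement in each natural language, and the suggested code is scored against the contest’s test cases.

```python
# Hypothetical evaluation harness (illustrative; not the study's real code).
def suggest_code(problem_statement: str) -> str:
    raise NotImplementedError("placeholder for a Copilot-style completion API")

def passes_tests(code: str, tests: list[tuple[str, str]]) -> bool:
    raise NotImplementedError("run `code` on each input, compare outputs")

def solve_rate(problems: list[dict], language: str) -> float:
    """Fraction of problems solved when prompted in `language`
    ('en', 'ja', or 'zh'); each problem carries per-language statements
    and the official test cases."""
    solved = sum(
        passes_tests(suggest_code(p["statement"][language]), p["tests"])
        for p in problems
    )
    return solved / len(problems)

# Comparing languages on the same problem set isolates the effect of the
# natural language from the difficulty of the programming task itself.
```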