About Me
Welcome! I am Atish Kumar Dipongkor, a Ph.D. candidate in Computer Science at the University of Central Florida, where I work under the supervision of Dr. Kevin Moran in the Software Automation, Generation, and Engineering (SAGE) Lab. My research lies at the intersection of Artificial Intelligence and Software Engineering, with a focus on enhancing the interpretability of Large Language Models (LLMs) for code-related tasks such as bug triaging, code generation, and authorship attribution. While LLMs demonstrate immense potential in automating software engineering workflows, their black-box nature limits their trustworthiness and adoption. My goal is to make these models more transparent, accountable, and aligned with human reasoning by advancing Explainable AI (XAI) techniques tailored to software engineering applications.
Before beginning my Ph.D., I accumulated over five years of industry experience as a software engineer and three years as a faculty member, experiences that ground my academic work in real-world challenges. I hold an M.S. and a B.S. in Software Engineering from the University of Dhaka. My research contributions have been recognized at venues such as ICSE and ASE and in the journal IEEE Access, and I have received honors including the Microsoft Most Valuable Professional Award and national hackathon championships.
Feel free to connect with me through LinkedIn, explore my research on Google Scholar, or check out my projects on GitHub.
News
- November 2024: I won 1st place in the ACM Student Research Competition at ASE'24!
- September 2024: Paper accepted at the ASE'24 Student Research Competition track!
- September 2024: ACM SIGSOFT CAPS application approved for attending ASE'24!
- April 2024: Paper accepted at the ICSE'24 Student Research Competition track!
- July 2023: One technical paper on neural bug triaging accepted at ASE'23!
Research
As a Ph.D. student working at the intersection of artificial intelligence and software engineering, I seek to enhance the interpretability of Large Language Models (LLMs) applied to code. While LLMs show immense promise in automating tasks such as bug triaging, code generation, and authorship attribution, their black-box nature limits their trustworthiness and adoption. My goal is to make these models more transparent and accountable by applying and advancing explainable AI (XAI) techniques tailored for code-related tasks.
The desired impact of my research on the field is twofold: (1) to develop principled methods for interpreting LLMs in software engineering tasks, helping researchers understand why models behave the way they do, and (2) to establish evaluation frameworks that go beyond accuracy, measuring the faithfulness, human alignment, and debuggability of model predictions. This contributes not only to interpretability research but also to the reliability and safety of AI-assisted software development workflows.
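To make the faithfulness criterion concrete, below is a minimal sketch of one deletion-style check (often called comprehensiveness): mask the tokens an explanation ranks highest and measure how much the predicted-class probability drops. Here `model`, `tokenizer`, and `token_scores` are placeholders for whatever classifier and attribution method are being evaluated, so this is an illustrative sketch rather than the exact pipeline used in my studies.

```python
# Minimal sketch of a comprehensiveness-style faithfulness check: mask the
# top-k attributed tokens and measure the drop in the predicted-class probability.
# `model`, `tokenizer`, and `token_scores` are placeholders for whatever
# classifier and attribution method are being evaluated.
import torch

def comprehensiveness(model, tokenizer, text, token_scores, target_class, k=5):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        full_prob = model(**enc).logits.softmax(-1)[0, target_class].item()

    # Indices of the k tokens the explanation considers most important.
    k = min(k, len(token_scores))
    top_k = torch.tensor(token_scores).topk(k).indices

    masked_ids = enc["input_ids"].clone()
    masked_ids[0, top_k] = tokenizer.mask_token_id  # assumes a masked-LM-style tokenizer
    with torch.no_grad():
        masked_prob = model(
            input_ids=masked_ids, attention_mask=enc["attention_mask"]
        ).logits.softmax(-1)[0, target_class].item()

    # A faithful explanation should make this drop large: removing the tokens it
    # highlights should noticeably hurt the model's confidence in its prediction.
    return full_prob - masked_prob
```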
On a broader scale, the societal impact I aim for is increased developer trust in AI tools, leading to wider and more responsible adoption of LLMs in real-world software projects. Transparent AI systems can improve collaboration between humans and machines, reduce the risk of erroneous predictions going unnoticed, and ensure developers remain in control — crucial factors in high-stakes applications such as security, critical infrastructure, and open-source ecosystems.
My work is grounded in two recent studies: one on bug triaging, where LLMs predict the responsible team for incoming reports, and another on code authorship attribution. In both, I applied XAI methods like Integrated Gradients to probe model behavior, revealing both meaningful signals and troubling artifacts. These insights underscore the urgent need for interpretability in applied AI.
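As an illustration of what this probing looks like in practice, the sketch below computes token-level Integrated Gradients attributions for a classification-style code model using Captum and Hugging Face Transformers. The `microsoft/codebert-base` checkpoint, the two-label head, and the example bug report are placeholder assumptions; the actual studies rely on task-specific fine-tuned models.

```python
# Minimal sketch: token-level Integrated Gradients for a code classifier.
# Assumptions: Captum + Transformers installed; "microsoft/codebert-base" with a
# randomly initialized 2-label head stands in for a task-specific fine-tuned model.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def forward_logits(input_ids, attention_mask):
    # Captum needs a forward function that returns one score per class.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

def attribute(text, target_class=0, n_steps=50):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]
    # Baseline: a same-length sequence of padding tokens (a simple, common choice).
    baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)

    lig = LayerIntegratedGradients(forward_logits, model.get_input_embeddings())
    attributions = lig.attribute(
        inputs=input_ids,
        baselines=baseline_ids,
        additional_forward_args=(attention_mask,),
        target=target_class,
        n_steps=n_steps,
    )
    # Collapse the embedding dimension to get one importance score per token.
    scores = attributions.sum(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
    return list(zip(tokens, scores.tolist()))

# Example: which tokens drive a (hypothetical) bug-triaging prediction?
for token, score in attribute("NullPointerException in PaymentService.charge()", target_class=1):
    print(f"{score:+.4f}  {token}")
```

Inspecting these per-token scores is what surfaces both the meaningful signals and the artifacts mentioned above, for example when a model leans on formatting or boilerplate tokens rather than the semantics of the report or code.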
This line of research is personally motivated by my experience working on large software maintenance teams, where I saw firsthand how opaque automation could erode trust and slow adoption. The Google PhD Fellowship would enable me to deepen this work and build tools that make LLMs more understandable, ethical, and useful, advancing not just the science of AI but also its human impact.