‘My Research Has Evolved into a Broader and More Encompassing Vision’
Seungmin Jin, from South Korea, is researching Explainable AI and plans to defend his PhD thesis, ‘A Visual Analytics System for Explaining and Improving Attention-Based Traffic Forecasting Models’, at HSE University this year. In September, he passed the pre-defence procedure at the School of Data Analysis and Artificial Intelligence of the HSE Faculty of Computer Science. In this interview for the HSE News Service, he talks about his academic path and his plans for the future.
I'm Seungmin Jin, originally from South Korea and currently based in Moscow. I'm a doctoral candidate and researcher specialising in Explainable AI (XAI), with a particular focus on enhancing the interpretability of deep learning models. My journey began in the field of traffic forecasting, where I applied XAI principles to improve model performance. However, my expertise extends beyond traffic, and I'm passionate about making AI more transparent and interpretable across various domains.
My collaboration with HSE on my PhD research was a well-considered choice, prompted by the university's strong reputation in data analytics and its commitment to advancing XAI.
This alignment with my research interests led me to explore opportunities for collaboration. As I delved into my studies, my research evolved from traffic forecasting to a broader focus on XAI, driven by the desire to make AI models more interpretable and trustworthy. My work with HSE has been instrumental in this evolution, providing invaluable resources and an environment conducive to innovative research.
The primary focus of my research is the advancement of Explainable Artificial Intelligence (XAI), a critical field in modern AI research. While my initial work centred on enhancing the interpretability of deep learning models in the context of traffic forecasting, my research has since evolved into a broader and more encompassing vision.
At its core, my research is driven by the need to address the inherent black-box nature of complex AI models, particularly deep learning models. These models, while highly powerful and capable of making accurate predictions, often lack transparency. Stakeholders, including domain experts and end-users, are frequently left in the dark when it comes to understanding why these models make specific decisions or predictions.
To bridge this gap, I have developed a novel Visual Analytics system, known as AttnAnalyzer, which serves as a pioneering solution to unravel the inner workings of deep learning models. This system enables users to explore the decision-making processes of these models in a highly interactive and intuitive manner. By visualising attention distributions and uncovering the intricate dependencies captured by the model, AttnAnalyzer provides a comprehensive view of how AI decisions are made.
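By way of illustration, the sketch below shows one hypothetical way an attention distribution of the kind AttnAnalyzer visualises could be rendered as a heatmap. It is not the AttnAnalyzer implementation: the attention matrix here is random placeholder data standing in for weights extracted from a trained traffic forecasting model, and the segment labels are assumed for the example.

```python
# Minimal, hypothetical sketch: rendering a spatial attention matrix as a
# heatmap. This is NOT the AttnAnalyzer implementation; the attention
# weights below are random placeholder data standing in for values
# extracted from a trained attention-based traffic forecasting model.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)

# Placeholder attention matrix: each row is a target road segment,
# each column a source segment that the target attends to.
num_segments = 12
attention = rng.random((num_segments, num_segments))
attention /= attention.sum(axis=1, keepdims=True)  # normalise each row to sum to 1

fig, ax = plt.subplots(figsize=(6, 5))
im = ax.imshow(attention, cmap="viridis", aspect="auto")
ax.set_xlabel("Source road segment")
ax.set_ylabel("Target road segment")
ax.set_title("Spatial attention distribution (illustrative data)")
fig.colorbar(im, ax=ax, label="Attention weight")
plt.tight_layout()
plt.show()
```

Reading such a heatmap row by row shows which source segments a given target segment relies on most, which is the kind of dependency a user would inspect interactively in a visual analytics tool.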
My research not only involves the creation and refinement of this Visual Analytics system, but also its application across various domains.
I aim to empower stakeholders, whether they are in traffic management, healthcare, finance, or other sectors, to gain deeper insights into AI-driven decisions. My work ensures that these decisions are not only accurate, but also understandable and trustworthy.
In essence, my research in Explainable AI transcends the boundaries of specific applications, encompassing a broader mission to democratise AI understanding. It is about making AI more accessible to all stakeholders, regardless of their technical expertise, and fostering a sense of trust and accountability in AI systems across diverse domains.
As for my future professional plans, I intend to apply for a position at HSE to continue my research. I'm deeply committed to the field of Explainable AI (XAI). While I've made significant contributions in the context of traffic forecasting, I aspire to apply XAI principles across diverse domains.
My vision is to collaborate with both academia and industry to develop cutting-edge XAI solutions that address the black-box nature of deep learning models.
I see myself contributing to the broader adoption of transparent and interpretable AI in critical areas such as healthcare diagnostics, financial risk assessment, and beyond. Ultimately, my aim is to drive the responsible and ethical use of AI by making its decision-making processes more understandable and accessible to all stakeholders.
While my work has involved significant collaboration with my Korean colleagues, I must acknowledge that without the guidance and support of my HSE-based supervisor, Professor Attila Kertesz-Farkas, I wouldn't have been able to complete this research.
I wish to express my deep gratitude to Professor Attila Kertesz-Farkas for his invaluable guidance and insights throughout my research journey. Under his leadership, the AIC Lab has been at the forefront of developing cutting-edge deep learning technologies for mass spectrometry data analysis in the fields of life sciences and biomedical applications.