Hello, I am Dimitris Gkoumas

AI Researcher

My research draws on Generative AI, Multimodal Representation Learning, Computational Linguistics (NLP), and Quantum Information Theory, the latter specifically for deploying its theoretical frameworks in representation learning.

About Me

I am Dimitris Gkoumas

My research has drawn on topics including Generative AI, Multimodal Representation Learning, Computational Linguistics (NLP), and Quantum Information Theory, the latter specifically for representation learning. I have worked on a variety of applied AI and NLP tasks, such as alignment between LLMs and humans, self-evaluation ecosystems for LLMs, cross-linguistics for molecule representation learning, multimodal learning across linguistic, visual, and acoustic modalities, discourse coherence, automatic labeling, text-to-concept generation, and semantics. Additionally, I have used computational models incorporating the mathematical formalism of Quantum Theory to model cognition in human language understanding.
My research interests lie in human-centric, geometric, and evolutionary representation learning; human-preference LLM-judge evaluation ecosystems; and unaligned multimodal representation learning. The overarching goal of my research is to discover inventive scientific solutions that are scalable, flexible, and inexpensive, particularly for tackling unprecedented challenges in areas such as climate change, healthcare, and pandemics. Addressing these challenges requires integrating AI with modules that enhance the existing capabilities of LLMs and overcome their failure modes when exposed to real-world settings.

My Research Interests

Generative AI

Aligning Large Language Models (LLMs) with human preferences and values for enhanced integration into real-world AI applications.

Evaluation

Formulating assessment frameworks for generative AI.

AI & Life Sciences

Exploring the potential of generative AI in Life Sciences to address the most formidable health challenges of our generation and contribute to the treatment of incurable diseases.

Multimodal Representation Learning

Researching LLMs for multimodal representation learning to solve tasks ranging from decision-making agents to medicine and robotics.

Geometric Representation Learning

Researching the mathematical frameworks for developing new computational learning algorithms.

Research Projects


Creating time-sensitive sensors from language & heterogeneous user-generated content Jan 2023 -
Awarded by UKRI/EPSRC Turing AI Fellowship (Grant: EP/V030302/1). Role: Postdoc

Memory Safe Trustworthy Programming Languages Jun 2021 - Dec 2023
Awarded by Huawei Ireland Research Centre. Role: AI Consultant

Quantum Information Access and Retrieval Theory (QUARTZ) Oct 2017 - Sep 2020
Awarded by EXCELLENT SCIENCE - Marie Curie Actions (No. 721321). Role: Early-stage researcher

EDUWORKS: An EU-wide investigation of labour market matching processes Jan 2016 - Sep 2017
Awarded by European Commission - Marie Curie ITN, FP7-PEOPLE-2012-ITN (No. 608311). Role: Early-stage researcher

Research Experience


Queen Mary University of London, London, UK Jan 2021 -
Work centred on representation alignment: leveraging human preference data in algorithms for aligning Large Language Models (LLMs) with human preferences and values, LLMs functioning as self-judge ecosystems for evaluation, and evolutionary dynamics in longitudinal linguistics, with a particular focus on applications in mental health and life sciences.
Role: Postdoc, Advisor: Prof. Maria Liakata

Huawei Ireland Research Center, Dublin, Ireland Jun 2021 - Dec 2023
Worked on semantic programming language representation, utilizing computational models and learning-based techniques to capture the meaning of code in an explicit and structured manner, with applications in program analysis and compiler design.
Role: AI Consultant, Advisor: Dr. Yijun Yu

University of Montreal, Montreal, Canada Jan 2020 - Apr 2020
Worked on quantum-inspired models for conversational emotion recognition
Role: Visiting researcher, Advisor: Prof. Jian-yun Nie

University of Copenhagen, Copenhagen, Denmark Aug 2019 - Dec 2020
Worked on multimodal models for sentiment analysis
Role: Visiting researcher, Advisor: Prof. Christina Lioma

University of Padua, Padua, Italy Jan 2019 - Mar 2019
Worked on tensor-based fusion representation learning
Role: Visiting researcher, Advisor: Prof. Massimo Melucci

Tianjin University, Tianjin, China Oct 2017 - Dec 2019
Worked on information retrieval
Role: Visiting researcher, Advisor: Prof. Dawei Song

The Open University, Milton Keynes, UK Oct 2017 - Mar 2021
Thesis: Quantum Cognitively Motivated Context-Aware Multimodal Representation Learning for Human Language Analysis
Role: Early-stage Researcher (ESR), Advisor: Prof. Dawei Song

Corvinno Technology Transfer Centre, Budapest, Hungary Jan 2016 - Sep 2017
Engaged in ontology development to improve language adaptation and evolution by creating structured representations of semantic changes and capturing semantic relationships, with applications in domains such as education and policy making.
Role: Early-stage Researcher (ESR), Advisor: Prof. Andras Gabor

Selected Publications

*Gkoumas, D., Wang, B., Tsakalidis, A., Wolters, M., Zubiaga, A., Purver, M., and Liakata, M. (2024). A longitudinal multi-modal dataset for dementia monitoring and diagnosis. Language Resources and Evaluation, Springer Nature. [Paper]
*Gkoumas, D., Purver, M. & Liakata, M. (2023). Reformulating NLP tasks to Capture Longitudinal Manifestation of Language Disorders in People with Dementia. EMNLP. [Paper]
*Gkoumas, D., Tsakalidis, A. & Liakata, M. (2023). A Digital Language Coherence Marker for Monitoring Dementia. EMNLP. [Paper]
*Gkoumas, D., Li, Q., Yu, Y., & Song, D. (2021). An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis. IJCAI. [Paper]
*Gkoumas, D., Li, Q., Dehdashti, S., Melucci, M., & Song, D. (2021). A Quantum Cognitively Motivated Decision Fusion Framework for Video Sentiment Analysis. AAAI. [Paper]
*Li, Q., Gkoumas, D., Sordoni, A., Nie, J.Y. & Melucci, M. (2021). Quantum-inspired Network for Conversational Emotion Recognition. AAAI. [Paper]
*Gkoumas, D., Li, Q., Lioma, C., Yu, Y., & Song, D. (2020). What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis. Information Fusion. [Paper]
*Li, Q., Gkoumas, D., Lioma, C., & Melucci, M. (2020). Quantum-inspired multimodal fusion for video sentiment analysis. Information Fusion. [Paper]
*Vas, R., Weber, C., & Gkoumas, D. (2018). Implementing connectivism by semantic technologies for self-directed learning. International Journal of Manpower. [Paper]

Blog

Let's Keep in Touch

Social Media
Google Scholar