👤 Summary

Machine Learning and AI professional with 7+ years of industry and 5 years of academic experience, combining strong production expertise with a research foundation in Explainable AI from KTH. Projects include nationwide fraud detection, taxi-fare estimation models, and real-time media analytics platforms.


💼 Work Experience

🎙️ Senior Machine Learning Engineer

All Ears AB | Stockholm, Sweden 🇸🇪
October 2025 – Present

I am involved in several ML projects, including optimizing automatic speech recognition (ASR) models, developing multimodal text, audio, and video embedding models, and detecting events in real time. I also build ranking and recommendation systems that prioritize high-signal brand mentions based on user preferences. As an individual contributor, I developed a survival-analysis model to predict time-to-churn and customer retention.

Tech stack: Parakeet, VLM2Vec-V2, PyTorch, and Hugging Face Transformers
Supervisor: Fredrik Olsson


🤖 Senior Machine Learning Engineer

Capgemini | Stockholm, Sweden 🇸🇪
October 2024 – August 2025

I was a member of the Generative AI and Analytics (GAIA) Team, where I developed graph-based machine learning, AI, and RAG solutions to detect fraudulent organizations and anomalous supplier payments.

Tech stack: Neo4j, PyTorch, Llama, LangChain, Docker, DeepSeek-V3, and Dagster
Supervisor: Tobias Fasth


🔬 Doctoral Researcher, Machine Learning

KTH Royal Institute of Technology | Stockholm, Sweden 🇸🇪
January 2019 – September 2024

I developed machine learning models to automatically predict truck-part maintenance needs (with Scania), built models for personalized medication recommendations based on patient history (with AstraZeneca), and evaluated large language models (LLMs) on complex reasoning tasks.

Tech stack: PyTorch, PyTorch Geometric, Scikit-learn, Statsmodels, Python
Supervisor: Henrik Boström


🎵 Machine Learning Research Intern

Spotify | Stockholm, Sweden 🇸🇪
May 2020 – September 2020

I developed ranking and relevance algorithms for a TensorFlow-based search and recommendation model on Google Cloud Platform (GCP), enabling accurate prediction of relevant songs and podcasts while making the model explainable.

Tech stack: TensorFlow, Google Cloud Bigtable, Python
Supervisors: Mounia Lalmas, Judith Bütepage


📊 Data Scientist & Team Lead (Consultant)

Iteam Solutions AB | Stockholm, Sweden 🇸🇪
October 2015 – October 2018

Developed a taxi-fare estimation model for Taxi Stockholm 🚕. Clustered unemployed individuals into similar groups, saving case officers hours of manual work each week (with TRR Trygghetsrådet).

Tech stack: TensorFlow and scikit-learn models on Google Cloud Platform (GCP) and Microsoft Azure
Supervisor: Christian Landgren


📈 Data Engineer and Scientist

Gapminder | Stockholm, Sweden 🇸🇪
March 2013 – September 2015

Developed Python data pipelines and D3.js visualizations to tell data-driven stories about countries’ social development, deployed on Microsoft Azure: https://www.gapminder.org/tools/ 🌍

Tech stack: Python, D3.js
Supervisors: Hans Rosling, Ola Rosling


🎓 Education

🎓 Ph.D., Machine Learning

KTH Royal Institute of Technology | Stockholm 🇸🇪
2019 – 2023

📚 MSc., Information Technology

University of Jyväskylä | Finland 🇫🇮
2011 – 2014

🔢 BSc., Applied Mathematics

Kharazmi University | Tehran, Iran 🇮🇷
2008 – 2011


📄 Scientific Publications

  1. Rahnama, A., Geurts, P., & Boström, H. (2024). Faithfulness of Local Explanations for Ensemble Models. 27th International Conference on Discovery Science.

  2. Rahnama, A., Bütepage, J., & Boström, H. (2024). Listwise Explanations of LambdaMART. The 2nd XAI World Conference.

  3. Rahnama, A., Bütepage, J., & Boström, H. (2024). Pointwise Explanations of LambdaMART. 14th Scandinavian Conference on AI.

  4. Rahnama, A., Alfaro, L.G., Wang, Z., & Movin, M. (2024). Local Interpretable Model-Agnostic Explanations for Neural Ranking Models. 14th Scandinavian Conference on AI.

  5. Rahnama, A. (2023). The Blame Problem in Evaluating Local Explanations, and How to Tackle It. XAI Workshop at ECAI Conference.

  6. Rahnama, A., Bütepage, J., Geurts, P., & Boström, H. (2023). Can Local Additive Explanations Explain Linear Additive Models? ECML Conference.

  7. Rahnama, A., & Boström, H. (2019). A Study of Data and Label Shift in the LIME Framework. NeurIPS Workshop on Human-Centric ML.

  8. Rahnama, A., & Toloo, M. (2018). An LP-Based Hyperparameter Optimization Model for Language Modeling. Journal of Supercomputing.

  9. Rahnama, A. (2014). Distributed Real-Time Sentiment Analysis for Big Data Social Streams. 2nd International Conference on Control, Decision and Information.