

News

A Reproducibility Study of Product-side Fairness in Bundle Recommendation
cs.AI updates on arXiv.org, July 22, 2025 at 4:00 am

arXiv:2507.14352v1 Announce Type: cross
Abstract: Recommender systems are known to exhibit fairness issues, particularly on the product side, where products and their associated suppliers receive unequal exposure in recommended results. While this problem has been widely studied in traditional recommendation settings, its implications for bundle recommendation (BR) remain largely unexplored. This emerging task introduces additional complexity: recommendations are generated at the bundle level, yet user satisfaction and product (or supplier) exposure depend on both the bundle and the individual items it contains. Existing fairness frameworks and metrics designed for traditional recommender systems may not directly translate to this multi-layered setting. In this paper, we conduct a comprehensive reproducibility study of product-side fairness in BR across three real-world datasets using four state-of-the-art BR methods. We analyze exposure disparities at both the bundle and item levels using multiple fairness metrics, uncovering important patterns. Our results show that exposure patterns differ notably between bundles and items, revealing the need for fairness interventions that go beyond bundle-level assumptions. We also find that fairness assessments vary considerably depending on the metric used, reinforcing the need for multi-faceted evaluation. Furthermore, user behavior plays a critical role: when users interact more frequently with bundles than with individual items, BR systems tend to yield fairer exposure distributions across both levels. Overall, our findings offer actionable insights for building fairer bundle recommender systems and establish a vital foundation for future research in this emerging domain.

Read More
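The abstract's bundle-versus-item exposure analysis can be sketched as a toy computation. The bundles, recommendation lists, and the choice of Gini coefficient here are illustrative assumptions, not the paper's data or metrics: exposure is counted at the bundle level, expanded to the items each bundle contains, and each distribution is scored for inequality.

```python
import numpy as np

def gini(exposure):
    """Gini coefficient of an exposure distribution (0 = equal, 1 = maximally unequal)."""
    x = np.sort(np.asarray(exposure, dtype=float))
    n = x.size
    # Standard closed form: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical top-k recommendation lists (bundle ids), one list per user.
recs = [[0, 1], [0, 2], [0, 1]]
# Hypothetical bundle -> item composition.
bundles = {0: [10, 11], 1: [11, 12], 2: [13]}

bundle_exposure, item_exposure = {}, {}
for rec_list in recs:
    for b in rec_list:
        bundle_exposure[b] = bundle_exposure.get(b, 0) + 1
        for item in bundles[b]:  # every recommendation of a bundle exposes its items
            item_exposure[item] = item_exposure.get(item, 0) + 1

print("bundle Gini:", gini(list(bundle_exposure.values())))
print("item Gini:", gini(list(item_exposure.values())))
```

The two Gini values generally differ, which is the abstract's point: a fairness score computed over bundles does not determine the score over the items inside them.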

Insights News

New to LLMs? Start Here
Towards Data Science, May 23, 2025 at 7:51 pm

A guide to Agents, LLMs, RAG, Fine-tuning, LangChain with practical examples to start building.
The post New to LLMs? Start Here appeared first on Towards Data Science.

Read More
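As a taste of one topic such a guide covers, here is a minimal retrieval step of the kind RAG builds on — a bag-of-words cosine similarity over three toy documents. This is an illustrative sketch in plain Python, not code from the guide, and it deliberately avoids LangChain:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

docs = [
    "Agents call tools in a loop to act toward a goal.",
    "RAG retrieves relevant documents before generation.",
    "Fine-tuning adapts model weights to a new task.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    q = Counter(query.lower().split())
    return sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)[:k]

print(retrieve("what is rag retrieval", docs))
```

A real RAG pipeline replaces the word counts with learned embeddings and feeds the retrieved passages into the LLM prompt, but the retrieve-then-generate shape is the same.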

Insights News

Estimating Product-Level Price Elasticities Using Hierarchical Bayesian
Towards Data Science, May 23, 2025 at 11:58 pm

Using one model to personalize ML results.
The post Estimating Product-Level Price Elasticities Using Hierarchical Bayesian appeared first on Towards Data Science.

Read More
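The post's model isn't reproduced here, but the core idea of hierarchical ("one model for all products") estimation — partial pooling — can be sketched in plain NumPy: fit a log-log elasticity per product, then shrink small-sample estimates toward the pooled mean. The data is synthetic and `tau` is a hypothetical prior strength, not a value from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_product(n, beta):
    # Synthetic log-price / log-quantity observations with true elasticity `beta`.
    log_p = rng.uniform(0, 1, n)
    log_q = 2.0 + beta * log_p + rng.normal(0, 0.3, n)
    return log_p, log_q

# Three products with very different sample sizes.
products = [make_product(n, b) for n, b in [(50, -1.2), (8, -2.0), (5, -1.6)]]

def ols_slope(x, y):
    # Slope of a one-variable least-squares fit (the log-log elasticity).
    x = x - x.mean()
    return float(np.dot(x, y) / np.dot(x, x))

raw = np.array([ols_slope(p, q) for p, q in products])
ns = np.array([len(p) for p, _ in products], dtype=float)

# Partial pooling: each estimate is a precision-weighted blend of its own
# data and the pooled mean; tau acts like `tau` pseudo-observations of prior.
pooled = np.average(raw, weights=ns)
tau = 5.0  # hypothetical prior strength
shrunk = (ns * raw + tau * pooled) / (ns + tau)
print("raw:", raw.round(2), "shrunk:", shrunk.round(2))
```

The 50-observation product barely moves, while the 5-observation product is pulled strongly toward the pooled mean — which is exactly how a full hierarchical Bayesian model "personalizes" per-product results without overfitting sparse ones.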

Insights News

How to Evaluate LLMs and Algorithms — The Right Way
Towards Data Science, May 23, 2025 at 2:02 pm

Never miss a new edition of The Variable, our weekly newsletter featuring a top-notch selection of editors’ picks, deep dives, community news, and more. Subscribe today! All the hard work it takes to integrate large language models and powerful algorithms into your workflows can go to waste if the outputs you see don’t live up to expectations.
The post How to Evaluate LLMs and Algorithms — The Right Way appeared first on Towards Data Science.

Read More

News

Do More with NumPy Array Type Hints: Annotate & Validate Shape & Dtype
Towards Data Science, May 23, 2025 at 6:43 pm

Improve static analysis and run-time validation with full generic specification.
The post Do More with NumPy Array Type Hints: Annotate & Validate Shape & Dtype appeared first on Towards Data Science.

Read More
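A small sketch of the idea, not the post's own code: the full generic form `np.ndarray[shape, dtype]` gives static checkers both the shape type and the dtype, while a hypothetical `validate` helper covers the run-time side, since NumPy does not enforce annotations when the code runs.

```python
import numpy as np
from numpy.typing import NDArray

# Full generic annotation: a 2-D float64 array. The shape part is for
# static analysis only; NumPy ignores it at run time.
Matrix = np.ndarray[tuple[int, int], np.dtype[np.float64]]

def validate(arr: NDArray, ndim: int, dtype: np.dtype) -> None:
    # Hypothetical run-time check backing up the static annotation.
    if arr.ndim != ndim:
        raise TypeError(f"expected {ndim}-D array, got {arr.ndim}-D")
    if arr.dtype != dtype:
        raise TypeError(f"expected dtype {dtype}, got {arr.dtype}")

def center(x: Matrix) -> Matrix:
    validate(x, ndim=2, dtype=np.dtype(np.float64))
    return x - x.mean(axis=0)  # subtract per-column means

print(center(np.ones((3, 2))))
```

A static checker such as mypy flags calls that pass the wrong dtype, and `validate` catches at run time what slips past it.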

News

Prototyping Gradient Descent in Machine Learning
Towards Data Science, May 24, 2025 at 1:12 am

Mathematical theorem and credit transaction prediction using Stochastic / Batch GD.
The post Prototyping Gradient Descent in Machine Learning appeared first on Towards Data Science.

Read More
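The batch versus stochastic variants the post mentions can be sketched side by side on synthetic linear data (a stand-in for the post's credit-transaction task, which isn't reproduced here). Batch GD steps along the gradient of the full dataset; SGD updates after every single sample:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y = 3x + 1 + noise.
X = rng.uniform(-1, 1, (200, 1))
y = 3 * X[:, 0] + 1 + rng.normal(0, 0.1, 200)
Xb = np.hstack([X, np.ones((200, 1))])  # append a bias column

def batch_gd(Xb, y, lr=0.1, epochs=200):
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = 2 * Xb.T @ (Xb @ w - y) / len(y)  # gradient over the full dataset
        w -= lr * grad
    return w

def sgd(Xb, y, lr=0.05, epochs=20):
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):        # one sample per update, shuffled
            grad = 2 * Xb[i] * (Xb[i] @ w - y[i])
            w -= lr * grad
    return w

print("batch:", batch_gd(Xb, y), "sgd:", sgd(Xb, y))
```

Both recover weights near [3, 1]; batch GD converges smoothly, while SGD gets close in far fewer passes over the data at the cost of noise around the optimum — the usual trade-off the post's title points at.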