A Product Data Scientist’s Take on LinkedIn Games After 500 Days of Play (Towards Data Science)
What a simple puzzle game reveals about experimentation, product thinking, and data science.
Pixi: A Smarter Way to Manage Python Environments (KDnuggets)
Pixi makes Python environment management simple, consistent, and portable.
YOLOv1 Paper Walkthrough: The Day YOLO First Saw the World (Towards Data Science)
A detailed walkthrough of the YOLOv1 architecture and its PyTorch implementation from scratch.
Top 5 Small AI Coding Models That You Can Run Locally (KDnuggets)
This article is for vibe coders and developers seeking private, fast, and affordable AI coding solutions.
UK and Germany plan to commercialise quantum supercomputing (AI News)
The UK and Germany plan to integrate their science sectors to accelerate the commercialisation of quantum supercomputing technology. Announced on the final day of the German president’s state visit, these joint commitments target the gap between R&D and enterprise application in computing, sensing, and timing. The partnership involves specific funding to fast-track product development and …
Aluminium OS is the AI-powered successor to ChromeOS (AI News)
The convergence of mobile and desktop operating systems is a goal that has remained elusive for big tech firms since the early days of the smartphone. Microsoft’s attempt in the form of Windows Mobile was reaching the end of its road by 2010, and despite Apple’s iOS/iPadOS and macOS moving very slowly towards one another …
Single-Round Scalable Analytic Federated Learning (cs.AI updates on arXiv.org, arXiv:2512.03336v1, Announce Type: cross)
Abstract: Federated Learning (FL) is plagued by two key challenges: high communication overhead and performance collapse on heterogeneous (non-IID) data. Analytic FL (AFL) provides a single-round, data distribution invariant solution, but is limited to linear models. Subsequent non-linear approaches, like DeepAFL, regain accuracy but sacrifice the single-round benefit. In this work, we break this trade-off. We propose SAFLe, a framework that achieves scalable non-linear expressivity by introducing a structured head of bucketed features and sparse, grouped embeddings. We prove this non-linear architecture is mathematically equivalent to a high-dimensional linear regression. This key equivalence allows SAFLe to be solved with AFL’s single-shot, invariant aggregation law. Empirically, SAFLe establishes a new state-of-the-art for analytic FL, significantly outperforming both linear AFL and multi-round DeepAFL in accuracy across all benchmarks, demonstrating a highly efficient and scalable solution for federated vision.
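The core idea — a non-linear feature expansion that stays linear in the expanded space, so the regression can be solved in closed form from summed sufficient statistics in a single communication round — can be sketched in a toy form. This is not the authors' SAFLe implementation (their code and exact architecture are in the paper); the bucketing scheme, ridge penalty, and helper names below are illustrative assumptions.

```python
import numpy as np

def bucketize(X, n_buckets, lo=0.0, hi=1.0):
    # Map each scalar feature into a one-hot bucket. Concatenating the
    # one-hot groups gives a sparse, non-linear expansion that is still
    # linear in the expanded feature space.
    idx = np.clip(((X - lo) / (hi - lo) * n_buckets).astype(int), 0, n_buckets - 1)
    out = np.zeros((X.shape[0], X.shape[1] * n_buckets))
    for j in range(X.shape[1]):
        out[np.arange(X.shape[0]), j * n_buckets + idx[:, j]] = 1.0
    return out

def client_stats(X, Y, n_buckets):
    # Each client sends only sufficient statistics, never raw data.
    Phi = bucketize(X, n_buckets)
    return Phi.T @ Phi, Phi.T @ Y

def server_solve(stats, lam=1e-3):
    # Single-round aggregation: sum the statistics, then solve a
    # ridge regression in closed form.
    A = sum(s[0] for s in stats)
    B = sum(s[1] for s in stats)
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), B)
```

Because the server only sums `Phi.T @ Phi` and `Phi.T @ Y`, the result is identical whether the data sits on one client or is split arbitrarily across many — the "data distribution invariant" property the abstract refers to.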
Prior preferences in active inference agents: soft, hard, and goal shaping (cs.AI updates on arXiv.org, arXiv:2512.03293v1, Announce Type: new)
Abstract: Active inference proposes expected free energy as an objective for planning and decision-making to adequately balance exploitative and explorative drives in learning agents. The exploitative drive, or what an agent wants to achieve, is formalised as the Kullback-Leibler divergence between a variational probability distribution, updated at each inference step, and a preference probability distribution that indicates what states or observations are more likely for the agent, hence determining the agent’s goal in a certain environment. In the literature, the questions of how the preference distribution should be specified and of how a certain specification impacts inference and learning in an active inference agent have been given hardly any attention. In this work, we consider four possible ways of defining the preference distribution, either providing the agents with hard or soft goals and either involving or not goal shaping (i.e., intermediate goals). We compare the performances of four agents, each given one of the possible preference distributions, in a grid world navigation task. Our results show that goal shaping enables the best performance overall (i.e., it promotes exploitation) while sacrificing learning about the environment’s transition dynamics (i.e., it hampers exploration).
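The three styles of preference distribution the abstract contrasts can be illustrated on a 1-D chain of states. These are not the paper's exact definitions (its task is a grid world and the precise parameterisations are the authors'); the decay temperature, waypoint masses, and the KL form of the exploitative term below are illustrative assumptions.

```python
import numpy as np

def hard_preference(n_states, goal):
    # Hard goal: nearly all preference mass on the goal state.
    p = np.full(n_states, 1e-6)
    p[goal] = 1.0
    return p / p.sum()

def soft_preference(n_states, goal, temperature=2.0):
    # Soft goal: preference decays with distance from the goal.
    d = np.abs(np.arange(n_states) - goal)
    p = np.exp(-d / temperature)
    return p / p.sum()

def shaped_preference(n_states, goal, waypoints):
    # Goal shaping: intermediate waypoints also receive preference mass,
    # pulling the agent along a route toward the final goal.
    p = np.full(n_states, 1e-6)
    for w in waypoints:
        p[w] = 0.5
    p[goal] = 1.0
    return p / p.sum()

def exploitative_drive(q, p):
    # KL divergence between the agent's belief q and its preference p;
    # minimising this is the exploitative term in expected free energy.
    return float(np.sum(q * np.log(q / p)))
```

A belief concentrated on the goal yields a near-zero drive under the hard preference, while the soft and shaped variants penalise off-goal beliefs more gently — which is exactly the exploitation/exploration knob the paper studies.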
Passive scan data goes stale fast as cloud assets shift daily, leaving teams blind to real exposures. Sprocket Security shows how continuous, automated recon gives accurate, up-to-date attack surface visibility.
Context-Aware Hierarchical Learning: A Two-Step Paradigm towards Safer LLMs (cs.AI updates on arXiv.org, arXiv:2512.03720v1, Announce Type: cross)
Abstract: Large Language Models (LLMs) have emerged as powerful tools for diverse applications. However, their uniform token processing paradigm introduces critical vulnerabilities in instruction handling, particularly when exposed to adversarial scenarios. In this work, we identify and propose a novel class of vulnerabilities, termed Tool-Completion Attack (TCA), which exploits function-calling mechanisms to subvert model behavior. To evaluate LLM robustness against such threats, we introduce the Tool-Completion benchmark, a comprehensive security assessment framework, which reveals that even state-of-the-art models remain susceptible to TCA, with surprisingly high attack success rates. To address these vulnerabilities, we introduce Context-Aware Hierarchical Learning (CAHL), a sophisticated mechanism that dynamically balances semantic comprehension with role-specific instruction constraints. CAHL leverages the contextual correlations between different instruction segments to establish a robust, context-aware instruction hierarchy. Extensive experiments demonstrate that CAHL significantly enhances LLM robustness against both conventional attacks and the proposed TCA, exhibiting strong generalization capabilities in zero-shot evaluations while still preserving model performance on generic tasks. Our code is available at https://github.com/S2AILab/CAHL.
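The "uniform token processing" weakness the abstract names can be shown with a toy prompt renderer. This is not CAHL (the actual method is a learned training mechanism, available at the linked repository); the role-tagging scheme and function names below are illustrative assumptions showing why flattening all segments into one undifferentiated string lets a tool result masquerade as an instruction.

```python
def render_naive(messages):
    # Uniform processing: every segment is flattened into one prompt, so
    # instructions injected into a tool result are indistinguishable from
    # the system prompt -- the opening a tool-completion attack exploits.
    return "\n".join(m["content"] for m in messages)

def render_hierarchical(messages):
    # Hierarchy-aware sketch: each segment is wrapped with its role, so a
    # model (or a downstream filter) can privilege system/user instructions
    # over anything that arrived inside a tool result.
    return "\n".join(
        f"<{m['role']}>{m['content']}</{m['role']}>" for m in messages
    )
```

In the naive rendering, an attacker-controlled tool output like "Ignore previous instructions" sits on equal footing with the system prompt; the role-wrapped form at least preserves the provenance a defence needs to enforce an instruction hierarchy.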