Prompt Fidelity: Measuring How Much of Your Intent an AI Agent Actually Executes (Towards Data Science)
How much of your AI agent’s output is real data versus confident guesswork?
How separating logic and search boosts AI agent scalability (AI News)
Separating logic from inference improves AI agent scalability by decoupling core workflows from execution strategies. The transition from generative AI prototypes to production-grade agents introduces a specific engineering hurdle: reliability. LLMs are stochastic by nature. A prompt that works once may fail on the second attempt. To mitigate this, development teams often wrap core business …
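The decoupling the excerpt describes is easy to show in miniature. Below is a minimal sketch, with hypothetical names (summarize_ticket, flaky_llm) of our own rather than anything from the article: the business workflow only knows what to ask, while an interchangeable execution strategy decides how to run the stochastic call.

# A minimal sketch (not the article's actual framework) of decoupling core
# logic from the execution strategy. The workflow describes *what* to ask;
# the strategy decides *how* to run the stochastic call.
import random
from typing import Callable

def summarize_ticket(ticket: str, infer: Callable[[str], str]) -> str:
    """Core business logic: knows the prompt, not the retry/search policy."""
    return infer(f"Summarize this support ticket in one sentence:\n{ticket}")

def with_retries(model_call: Callable[[str], str],
                 is_valid: Callable[[str], bool],
                 max_attempts: int = 3) -> Callable[[str], str]:
    """Execution strategy: retry a stochastic call until the output validates."""
    def run(prompt: str) -> str:
        last = ""
        for _ in range(max_attempts):
            last = model_call(prompt)
            if is_valid(last):
                return last
        return last  # caller decides how to handle a still-invalid result
    return run

# Hypothetical stand-in for a real LLM client; fails roughly half the time.
def flaky_llm(prompt: str) -> str:
    return "" if random.random() < 0.5 else "Customer cannot reset password."

infer = with_retries(flaky_llm, is_valid=lambda s: len(s) > 0)
print(summarize_ticket("Password reset link never arrives.", infer))

Swapping with_retries for a best-of-n or tree-search strategy would change the reliability characteristics without touching summarize_ticket at all, which is the scalability argument in a nutshell.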
Intuit, Uber, and State Farm trial AI agents inside enterprise workflows (AI News)
The way large companies use artificial intelligence is changing. For years, AI in business meant experimenting with tools that could answer questions or help with small tasks. Now, some big enterprises are moving beyond tools to AI agents that can actually do practical work in systems and workflows. This week, OpenAI introduced a new platform …
When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs (cs.AI updates on arXiv.org)
arXiv:2508.03365v3 Announce Type: replace-cross
Abstract: As large language models (LLMs) become increasingly integrated into daily life, audio has emerged as a key interface for human-AI interaction. However, this convenience also introduces new vulnerabilities, making audio a potential attack surface for adversaries. Our research introduces WhisperInject, a two-stage adversarial audio attack framework that manipulates state-of-the-art audio language models to generate harmful content. Our method embeds harmful payloads as subtle perturbations into audio inputs that remain intelligible to human listeners. The first stage uses a novel reward-based white-box optimization method, Reinforcement Learning with Projected Gradient Descent (RL-PGD), to jailbreak the target model and elicit harmful native responses. This native harmful response then serves as the target for Stage 2, Payload Injection, where we use gradient-based optimization to embed subtle perturbations into benign audio carriers, such as weather queries or greeting messages. Our method achieves average attack success rates of 60-78% across two benchmarks and five multimodal LLMs, validated by multiple evaluation frameworks. Our work demonstrates a new class of practical, audio-native threats, moving beyond theoretical exploits to reveal a feasible and covert method for manipulating multimodal AI systems.
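The abstract’s Stage 2 is a gradient-based perturbation search. For readers who want the mechanics, here is the generic projected-gradient (PGD) pattern it builds on; this is a textbook sketch under assumed interfaces (a differentiable model returning per-step logits, and a target_ids tensor), not the authors’ released code.

# Generic PGD loop: find a small additive perturbation delta
# (||delta||_inf <= eps) that pushes the model's output distribution
# toward `target_ids` while keeping the carrier audio nearly unchanged.
# `model` and `target_ids` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def pgd_audio_perturbation(model, audio, target_ids, eps=0.002,
                           step=5e-4, iters=200):
    delta = torch.zeros_like(audio, requires_grad=True)
    for _ in range(iters):
        logits = model(audio + delta)           # (seq_len, vocab)
        loss = F.cross_entropy(logits, target_ids)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()   # signed gradient step
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return (audio + delta).detach()

The small eps budget is what keeps the perturbed clip intelligible to human listeners, which is the covertness property the paper emphasizes.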
DISCOVER: Identifying Patterns of Daily Living in Human Activities from Smart Home Data (cs.AI updates on arXiv.org)
arXiv:2503.01733v3 Announce Type: replace-cross
Abstract: Smart homes equipped with ambient sensors offer a transformative approach to continuous health monitoring and assisted living. Traditional research in this domain primarily focuses on Human Activity Recognition (HAR), which relies on mapping sensor data to a closed set of predefined activity labels. However, the fixed granularity of these labels often constrains their practical utility, failing to capture the subtle, household-specific nuances essential, for example, for tracking individual health over time. To address this, we propose DISCOVER, a framework for discovering and annotating Patterns of Daily Living (PDL) – fine-grained, recurring sequences of sensor events that emerge directly from a resident’s unique routines. DISCOVER utilizes a self-supervised feature extraction and representation-aware clustering pipeline, supported by a custom visualization interface that enables experts to interpret and label discovered patterns with minimal effort. Our evaluation across multiple smart-home environments demonstrates that DISCOVER identifies cohesive behavioral clusters with high inter-rater agreement while achieving classification performance comparable to fully-supervised baselines using only 0.01% of the labels. Beyond reducing annotation overhead, DISCOVER establishes a foundation for longitudinal analysis. By grounding behavior in a resident’s specific environment rather than rigid semantic categories, our framework facilitates the observation of within-person habitual drift. This capability positions the system as a potential tool for identifying subtle behavioral indicators associated with early-stage cognitive decline in future longitudinal studies.
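The “embed, then cluster” pipeline the abstract describes can be approximated in a few lines. The sketch below substitutes TF-IDF over sensor-event n-grams for the paper’s self-supervised encoder; the sensor vocabulary and windows are invented for illustration.

# Stand-in pipeline: represent windows of sensor events as vectors,
# then cluster them so recurring routines group together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Each "document" is a window of sensor events from one home, e.g. the
# sequence of triggered sensor ids during a morning routine.
windows = [
    "kitchen_motion fridge_door stove_on stove_off kitchen_motion",
    "bed_pressure_off bathroom_motion sink_on sink_off",
    "kitchen_motion fridge_door kettle_on kettle_off",
    "bed_pressure_off bathroom_motion shower_on shower_off",
]

vec = TfidfVectorizer(ngram_range=(1, 2))  # capture short event sequences
X = vec.fit_transform(windows)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. "make breakfast" vs. "wake up" routines separate

The paper’s point is that an expert then labels a handful of such clusters instead of every event, which is how it reaches supervised-level accuracy with roughly 0.01% of the labels.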
Plug-and-Play Emotion Graphs for Compositional Prompting in Zero-Shot Speech Emotion Recognition (cs.AI updates on arXiv.org)
arXiv:2509.25458v2 Announce Type: replace
Abstract: Large audio-language models (LALMs) exhibit strong zero-shot performance across speech tasks but struggle with speech emotion recognition (SER) due to weak paralinguistic modeling and limited cross-modal reasoning. We propose Compositional Chain-of-Thought Prompting for Emotion Reasoning (CCoT-Emo), a framework that introduces structured Emotion Graphs (EGs) to guide LALMs in emotion inference without fine-tuning. Each EG encodes seven acoustic features (e.g., pitch, speech rate, jitter, shimmer), textual sentiment, keywords, and cross-modal associations. Embedded into prompts, EGs provide interpretable and compositional representations that enhance LALM reasoning. Experiments across SER benchmarks show that CCoT-Emo outperforms prior SOTA and improves accuracy over zero-shot baselines.
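To make the idea concrete, here is one plausible way an Emotion Graph could be serialized into a prompt. The feature names follow the abstract (pitch, speech rate, jitter, shimmer, sentiment, keywords), but the schema and wording are our assumptions, not the paper’s template.

# Sketch: flatten an Emotion Graph into a chain-of-thought prompt so a
# LALM can reason over acoustic and textual evidence without fine-tuning.
def emotion_graph_prompt(transcript: str, acoustics: dict,
                         sentiment: str, keywords: list[str]) -> str:
    nodes = "\n".join(f"- {k}: {v}" for k, v in acoustics.items())
    return (
        "You are analyzing the speaker's emotion. Reason step by step over\n"
        "this Emotion Graph before answering.\n"
        f"Acoustic nodes:\n{nodes}\n"
        f"Text sentiment: {sentiment}\n"
        f"Keywords: {', '.join(keywords)}\n"
        f"Transcript: {transcript!r}\n"
        "Answer with one label: angry, happy, sad, or neutral."
    )

print(emotion_graph_prompt(
    "I can't believe this happened again.",
    {"pitch": "high, rising", "speech_rate": "fast",
     "jitter": "elevated", "shimmer": "elevated"},
    sentiment="negative",
    keywords=["can't believe", "again"],
))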
Beyond Fixed Frames: Dynamic Character-Aligned Speech Tokenization (cs.AI updates on arXiv.org)
arXiv:2601.23174v2 Announce Type: replace-cross
Abstract: Neural audio codecs are at the core of modern conversational speech technologies, converting continuous speech into sequences of discrete tokens that can be processed by LLMs. However, existing codecs typically operate at fixed frame rates, allocating tokens uniformly in time and producing unnecessarily long sequences. In this work, we introduce DyCAST, a Dynamic Character-Aligned Speech Tokenizer that enables variable-frame-rate tokenization through soft character-level alignment and explicit duration modeling. DyCAST learns to associate tokens with character-level linguistic units during training and supports alignment-free inference with direct control over token durations at decoding time. To improve speech resynthesis quality at low frame rates, we further introduce a retrieval-augmented decoding mechanism that enhances reconstruction fidelity without increasing bitrate. Experiments show that DyCAST achieves competitive speech resynthesis quality and downstream performance while using significantly fewer tokens than fixed-frame-rate codecs. Code and checkpoints will be released publicly at https://github.com/lucadellalib/dycast.
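A toy calculation shows why duration-aware, character-aligned tokens shorten sequences relative to a fixed clock. Everything below, the 50 Hz rate and the character durations, is invented for illustration and is not DyCAST itself.

# Toy comparison: tokens that follow linguistic units vs. a fixed clock.
FIXED_FRAME_RATE_HZ = 50  # a typical fixed-rate codec emits 50 tokens/sec

# (character, duration in seconds) for a ~1.2 s utterance, e.g. "hi there"
aligned = [("h", 0.08), ("i", 0.22), (" ", 0.10), ("t", 0.07),
           ("h", 0.06), ("e", 0.25), ("r", 0.12), ("e", 0.30)]

total_dur = sum(d for _, d in aligned)
fixed_tokens = round(total_dur * FIXED_FRAME_RATE_HZ)  # one token per frame
dynamic_tokens = len(aligned)                          # one token per character

print(f"fixed-rate codec:  {fixed_tokens} tokens for {total_dur:.2f}s")
print(f"character-aligned: {dynamic_tokens} tokens for {total_dur:.2f}s")

At lower token counts, reconstruction quality normally suffers; the retrieval-augmented decoding the abstract mentions is the paper’s answer to recovering fidelity without raising the bitrate.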
When AI Persuades: Adversarial Explanation Attacks on Human Trust in AI-Assisted Decision Making (cs.AI updates on arXiv.org)
arXiv:2602.04003v1 Announce Type: new
Abstract: Most adversarial threats in artificial intelligence target the computational behavior of models rather than the humans who rely on them. Yet modern AI systems increasingly operate within human decision loops, where users interpret and act on model recommendations. Large Language Models generate fluent natural-language explanations that shape how users perceive and trust AI outputs, revealing a new attack surface at the cognitive layer: the communication channel between AI and its users. We introduce adversarial explanation attacks (AEAs), where an attacker manipulates the framing of LLM-generated explanations to modulate human trust in incorrect outputs. We formalize this behavioral threat through the trust miscalibration gap, a metric that captures the difference in human trust between correct and incorrect outputs under adversarial explanations. By incorporating this gap, AEAs explore the daunting threats in which persuasive explanations reinforce users’ trust in incorrect predictions. To characterize this threat, we conducted a controlled experiment (n = 205), systematically varying four dimensions of explanation framing: reasoning mode, evidence type, communication style, and presentation format. Our findings show that users report nearly identical trust for adversarial and benign explanations, with adversarial explanations preserving the vast majority of benign trust despite being incorrect. The most vulnerable cases arise when AEAs closely resemble expert communication, combining authoritative evidence, neutral tone, and domain-appropriate reasoning. Vulnerability is highest on hard tasks, in fact-driven domains, and among participants who are less formally educated, younger, or highly trusting of AI. This is the first systematic security study that treats explanations as an adversarial cognitive channel and quantifies their impact on human trust in AI-assisted decision making.
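The abstract’s central quantity, the trust miscalibration gap, reduces to simple arithmetic: the difference in mean reported trust between correct and incorrect outputs under a given explanation condition. The ratings below are invented; a near-zero gap under adversarial explanations is exactly the failure mode the study measures.

# Sketch of the trust miscalibration gap on made-up experiment data.
from statistics import mean

def trust_gap(ratings):
    """ratings: list of (trust_score, output_was_correct) pairs."""
    correct = [t for t, ok in ratings if ok]
    incorrect = [t for t, ok in ratings if not ok]
    return mean(correct) - mean(incorrect)

benign = [(6.1, True), (5.8, True), (3.2, False), (2.9, False)]
adversarial = [(6.0, True), (5.9, True), (5.6, False), (5.4, False)]

# A large gap means users calibrate trust well; near zero means they trust
# wrong answers almost as much as right ones.
print(f"benign gap:      {trust_gap(benign):.2f}")       # ~2.90
print(f"adversarial gap: {trust_gap(adversarial):.2f}")  # ~0.45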
GPT-5 lowers the cost of cell-free protein synthesis (OpenAI News)
An autonomous lab combining OpenAI’s GPT-5 with Ginkgo Bioworks’ cloud automation cut cell-free protein synthesis costs by 40% through closed-loop experimentation.
AI Expo 2026 Day 2: Moving experimental pilots to AI production (AI News)
The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more …