News

Distributionally Robust Control with End-to-End Statistically Guaranteed Metric Learning (cs.AI updates on arXiv.org)

arXiv:2510.10214v1 Announce Type: cross
Abstract: Wasserstein distributionally robust control (DRC) has recently emerged as a principled paradigm for handling uncertainty in stochastic dynamical systems. However, it constructs data-driven ambiguity sets via uniform distribution shifts before sequentially incorporating them into downstream control synthesis. This segregation between ambiguity set construction and control objectives inherently introduces a structural misalignment, which leads to conservative control policies with sub-optimal performance. To address this limitation, we propose a novel end-to-end finite-horizon Wasserstein DRC framework that integrates the learning of anisotropic Wasserstein metrics with downstream control tasks in a closed-loop manner, thus enabling ambiguity sets to be systematically adjusted along performance-critical directions and yielding more effective control policies. This framework is formulated as a bilevel program: the inner level characterizes dynamical system evolution under DRC, while the outer level refines the anisotropic metric by leveraging control-performance feedback across a range of initial conditions. To solve this program efficiently, we develop a stochastic augmented Lagrangian algorithm tailored to the bilevel structure. Theoretically, we prove that the learned ambiguity sets preserve statistical finite-sample guarantees under a novel radius adjustment mechanism, and we establish the well-posedness of the bilevel formulation by demonstrating its continuity with respect to the learnable metric. Furthermore, we show that the algorithm converges to stationary points of the outer-level problem, which are statistically consistent with the optimal metric at a non-asymptotic convergence rate. Experiments on both numerical and inventory control tasks verify that the proposed framework achieves superior closed-loop performance and robustness compared with state-of-the-art methods.
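The bilevel structure described in the abstract can be sketched in generic notation. The symbols below (the metric matrix $A$, cost $J$, policy $\pi$, radius $\varepsilon$) are illustrative placeholders chosen by us, not the paper's actual formulation:

```latex
% Illustrative sketch only -- notation is ours, not the paper's.
% Outer level: learn a positive-definite matrix A parameterizing the
% anisotropic Wasserstein metric, using closed-loop cost feedback J
% averaged over initial conditions x_0:
\min_{A \succ 0}\; \mathbb{E}_{x_0}\!\left[ J\!\left(\pi_A^{\star}, x_0\right) \right]
% Inner level: for fixed A, solve the finite-horizon Wasserstein DRC
% problem over an ambiguity ball around the empirical distribution:
\pi_A^{\star} \in \arg\min_{\pi}\;
  \sup_{\,Q \,:\; W_A(Q,\, \widehat{P}_N) \le \varepsilon}
  \mathbb{E}_{Q}\!\left[ \sum_{t=0}^{T-1} c\big(x_t, \pi(x_t)\big) \right]
% where W_A is the Wasserstein distance with anisotropic ground cost
% \| A^{1/2}(u - v) \| in place of the isotropic \| u - v \|.
```

The end-to-end aspect is that the outer minimization reshapes $A$, and hence the geometry of the ambiguity ball, in the directions where the inner controller's performance is most sensitive.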


News

The algorithmic regulator (cs.AI updates on arXiv.org)

arXiv:2510.10300v1 Announce Type: cross
Abstract: The regulator theorem states that, under certain conditions, any optimal controller must embody a model of the system it regulates, grounding the idea that controllers embed, explicitly or implicitly, internal models of what they control. This principle underpins neuroscience and predictive-brain theories such as the Free-Energy Principle and Kolmogorov/Algorithmic Agent theory. However, the theorem has only been proven in limited settings. Here, we treat the deterministic, closed, coupled world-regulator system $(W,R)$ as a single self-delimiting program $p$ via a constant-size wrapper that produces the world output string $x$ fed to the regulator. We analyze regulation from the viewpoint of the algorithmic complexity of the output, $K(x)$. We define $R$ to be a \emph{good algorithmic regulator} if it \emph{reduces} the algorithmic complexity of the readout relative to a null (unregulated) baseline $\varnothing$, i.e.,
\[ \Delta = K\big(O_{W,\varnothing}\big) - K\big(O_{W,R}\big) > 0. \]
We then prove that the larger $\Delta$ is, the more world-regulator pairs with high mutual algorithmic information are favored. More precisely, a complexity gap $\Delta > 0$ yields
\[ \Pr\big((W,R)\mid x\big) \le C\,2^{\,M(W{:}R)}\,2^{-\Delta}, \]
making low $M(W{:}R)$ exponentially unlikely as $\Delta$ grows. This is an AIT version of the idea that "the regulator contains a model of the world." The framework is distribution-free, applies to individual sequences, and complements the Internal Model Principle. Beyond this necessity claim, the same coding-theorem calculus singles out a \emph{canonical scalar objective} and implicates a \emph{planner}. On the realized episode, a regulator behaves \emph{as if} it minimized the conditional description length of the readout.
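The Kolmogorov complexity $K(x)$ that defines the gap $\Delta$ is uncomputable, but a standard practical hedge in algorithmic-information work is to use a compressor's output length as a crude upper-bound proxy. The sketch below illustrates the sign of the gap on two made-up readouts (an irregular "unregulated" signal and a set-point-holding "regulated" one); the compressor choice and the synthetic signals are our illustration, not anything from the paper:

```python
import hashlib
import zlib


def description_length(readout: bytes) -> int:
    """Crude upper-bound proxy for K(readout): length of its zlib compression."""
    return len(zlib.compress(readout, level=9))


# Hypothetical readouts, 4096 bytes each:
# - unregulated: incompressible-looking bytes (SHA-256 of a counter),
#   standing in for a system that drifts irregularly;
# - regulated: a tight oscillation around a set-point of 100, standing in
#   for a readout held regular by a good regulator.
unregulated = b"".join(
    hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(128)
)
regulated = bytes(100 + (i % 2) for i in range(4096))

# A positive gap is the signature of a "good algorithmic regulator"
# in the abstract's sense (under this compression proxy).
delta = description_length(unregulated) - description_length(regulated)
print(delta > 0)
```

Because real compressors only upper-bound $K$, a positive proxy gap is suggestive rather than conclusive, which mirrors the hedged, distribution-free framing of the result itself.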


Daily AI News
How Huawei is building agentic AI systems that make decisions independently (AI News)


In a cement plant operated by Conch Group, an agentic AI system built on Huawei infrastructure now predicts the strength of clinker with over 90% accuracy and autonomously adjusts calcination parameters to cut coal consumption by 1%, decisions that previously required human expertise accumulated over decades. This exemplifies how Huawei is developing agentic AI systems that …
The post How Huawei is building agentic AI systems that make decisions independently appeared first on AI News.


Daily AI News

90% of science is lost. This new AI just found it (Artificial Intelligence News — ScienceDaily)

Vast amounts of valuable research data remain unused, trapped in labs or lost to time. Frontiers aims to change that with FAIR² Data Management, a groundbreaking AI-driven system that makes datasets reusable, verifiable, and citable. By uniting curation, compliance, peer review, and interactive visualization in one platform, FAIR² empowers scientists to share their work responsibly and gain recognition.


News
Here’s When You Would Choose Spreadsheets Over SQL (KDnuggets)


Spreadsheets might seem obsolete in the world of relational databases. They’re not! Here are situations when spreadsheets easily topple SQL.


News
Make agents a reality with Amazon Bedrock AgentCore: Now generally available (Artificial Intelligence)


Learn why customers choose AgentCore to build secure, reliable AI solutions using their choice of frameworks and models for production workloads.
