
News

Quantum Machine Learning for Optimizing Entanglement Distribution in Quantum Sensor Circuits (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, September 1, 2025 (arXiv:2508.21252v1, Announce Type: cross)
Abstract: In the rapidly evolving field of quantum computing, optimizing quantum circuits for specific tasks is crucial for enhancing performance and efficiency. More recently, quantum sensing has become a distinct and rapidly growing branch of research within the area of quantum science and technology. The field is expected to provide new opportunities, especially regarding high sensitivity and precision. Entanglement is one of the key factors in achieving high sensitivity and measurement precision [3]. This paper presents a novel approach utilizing quantum machine learning techniques to optimize entanglement distribution in quantum sensor circuits. By leveraging reinforcement learning within a quantum environment, we aim to optimize the entanglement layout to maximize Quantum Fisher Information (QFI) and entanglement entropy, which are key indicators of a quantum system’s sensitivity and coherence, while minimizing circuit depth and gate counts. Our implementation, based on Qiskit, integrates noise models and error mitigation strategies to simulate realistic quantum environments. The results demonstrate significant improvements in circuit performance and sensitivity, highlighting the potential of machine learning in quantum circuit optimization by measuring high QFI and entropy in the range of 0.84-1.0 with depth and gate count reduction by 20-86%.

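For readers who want to poke at the quantities mentioned in the abstract, the following is a minimal sketch (not the paper's implementation) that uses Qiskit's quantum_info utilities to compute the entanglement entropy, depth, and gate counts of a small entangling layout; a reinforcement-learning search over layouts would optimize scores like these.

```python
# Minimal sketch, not the paper's implementation: measure two of the quantities
# the abstract optimizes (entanglement entropy and circuit cost) for a small
# candidate entangling layout.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace, entropy

# Candidate entanglement layout: a simple GHZ-style circuit on 3 qubits.
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)

# Entanglement entropy of the reduced state of qubit 0 (1.0 bit for a maximally
# entangled bipartition, consistent with the 0.84-1.0 range quoted above).
state = Statevector.from_instruction(qc)
rho_0 = partial_trace(state, [1, 2])      # trace out qubits 1 and 2
ent = entropy(rho_0, base=2)

# Circuit cost terms that a learned policy would try to minimize.
print(f"entanglement entropy = {ent:.3f}")
print(f"depth = {qc.depth()}, gate counts = {dict(qc.count_ops())}")
```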

News

A Financial Brain Scan of the LLM (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, September 1, 2025 (arXiv:2508.21285v1, Announce Type: cross)
Abstract: Emerging techniques in computer science make it possible to “brain scan” large language models (LLMs), identify the plain-English concepts that guide their reasoning, and steer them while holding other factors constant. We show that this approach can map LLM-generated economic forecasts to concepts such as sentiment, technical analysis, and timing, and compute their relative importance without reducing performance. We also show that models can be steered to be more or less risk-averse, optimistic, or pessimistic, which allows researchers to correct or simulate biases. The method is transparent, lightweight, and replicable for empirical research in the social sciences.

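The steering mechanism described here is usually implemented by adding a scaled "concept" direction to an intermediate activation while holding everything else constant. The sketch below illustrates that mechanism with a toy PyTorch model and a forward hook; it is not the authors' method, and the concept vector is random here rather than learned from model internals.

```python
# Generic activation-steering sketch with a toy model, not the paper's code:
# add strength * concept_direction to a hidden layer's output via a forward hook.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 16
model = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 1))

# Hypothetical "concept" direction in hidden space (e.g., "optimism"); random here.
concept = torch.randn(hidden)
concept = concept / concept.norm()

def make_steering_hook(direction, strength):
    # Returning a value from a forward hook replaces the layer's output.
    def hook(module, inputs, output):
        return output + strength * direction
    return hook

x = torch.randn(4, 8)
baseline = model(x)

# Register the hook on the hidden layer, run the steered pass, then remove it.
handle = model[0].register_forward_hook(make_steering_hook(concept, strength=2.0))
steered = model(x)
handle.remove()

print("mean output shift:", (steered - baseline).mean().item())
```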

News

Fuzzy, Symbolic, and Contextual: Enhancing LLM Instruction via Cognitive Scaffolding (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, September 1, 2025 (arXiv:2508.21204v1, Announce Type: new)
Abstract: We study how architectural inductive biases influence the cognitive behavior of large language models (LLMs) in instructional dialogue. We introduce a symbolic scaffolding mechanism paired with a short-term memory schema designed to promote adaptive, structured reasoning in Socratic tutoring. Using controlled ablation across five system variants, we evaluate model outputs via expert-designed rubrics covering scaffolding, responsiveness, symbolic reasoning, and conversational memory. We present preliminary results using an LLM-based evaluation framework aligned to a cognitively grounded rubric. This enables scalable, systematic comparisons across architectural variants in early-stage experimentation. The preliminary results show that our full system consistently outperforms baseline variants. Analysis reveals that removing memory or symbolic structure degrades key cognitive behaviors, including abstraction, adaptive probing, and conceptual continuity. These findings support a processing-level account in which architectural scaffolds can reliably shape emergent instructional strategies in LLMs.

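As a rough illustration of what a short-term memory schema plus a symbolic scaffold can look like in code, here is a minimal Python sketch that keeps the last few dialogue turns and prepends a fixed reasoning scaffold to each tutoring prompt. The class names and prompt wording are assumptions, not the paper's actual system.

```python
# Illustrative sketch (assumed names and wording, not the paper's system):
# a tiny short-term memory schema plus a symbolic scaffold prepended to each turn.
from dataclasses import dataclass, field

@dataclass
class ShortTermMemory:
    max_turns: int = 5
    turns: list = field(default_factory=list)

    def add(self, student: str, tutor: str) -> None:
        self.turns.append((student, tutor))
        self.turns = self.turns[-self.max_turns:]   # keep only recent context

    def render(self) -> str:
        return "\n".join(f"Student: {s}\nTutor: {t}" for s, t in self.turns)

SYMBOLIC_SCAFFOLD = (
    "Follow the steps: (1) restate the student's claim, "
    "(2) name the underlying concept, (3) ask one probing question."
)

def build_prompt(memory: ShortTermMemory, student_msg: str) -> str:
    # The full prompt an LLM tutor would receive under this scaffolded variant.
    return (f"{SYMBOLIC_SCAFFOLD}\n\nRecent dialogue:\n{memory.render()}\n\n"
            f"Student: {student_msg}\nTutor:")

memory = ShortTermMemory()
memory.add("Is zero an even number?", "What does it mean for a number to be even?")
print(build_prompt(memory, "I think even means divisible by 2."))
```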

News

The Machine Learning Lessons I’ve Learned This Month (Towards Data Science)

Towards Data Science, August 31, 2025. August 2025: logging, lab notebooks, overnight runs.
The post The Machine Learning Lessons I’ve Learned This Month appeared first on Towards Data Science.

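In the spirit of the post's themes (logging, lab notebooks, overnight runs), here is a small, generic sketch that appends one JSON record per experiment to a run log so unattended overnight runs stay traceable; the file name, fields, and numbers are illustrative, not from the article.

```python
# Generic run-logging sketch: one JSON line per experiment, appended to a
# lab-notebook-style file. File name and fields are placeholders.
import json
import time
from pathlib import Path

LOG_PATH = Path("runs.jsonl")   # hypothetical lab-notebook file

def log_run(config: dict, metrics: dict) -> None:
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "config": config,
        "metrics": metrics,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record an overnight training run's settings and final scores.
log_run({"lr": 3e-4, "batch_size": 64, "epochs": 50},
        {"val_accuracy": 0.913, "train_loss": 0.21})
```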

News

How to Import Pre-Annotated Data into Label Studio and Run the Full Stack with Docker (Towards Data Science)

Towards Data Science, August 29, 2025. From VOC to JSON: importing pre-annotations made simple.
The post How to Import Pre-Annotated Data into Label Studio and Run the Full Stack with Docker appeared first on Towards Data Science.

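The "VOC to JSON" step the post refers to can be sketched roughly as below: parse one Pascal VOC annotation file and emit a Label Studio task with pre-annotations. The field names assume a RectangleLabels labeling config with from_name="label" and to_name="image", and the file paths are placeholders; adjust both to your own project.

```python
# Rough sketch: convert one Pascal VOC XML file into a Label Studio task with
# pre-annotations. Assumes a RectangleLabels config (from_name="label",
# to_name="image"); paths below are placeholders.
import json
import xml.etree.ElementTree as ET

def voc_to_labelstudio_task(xml_path: str, image_url: str) -> dict:
    root = ET.parse(xml_path).getroot()
    width = float(root.findtext("size/width"))
    height = float(root.findtext("size/height"))

    results = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        xmin, ymin = float(box.findtext("xmin")), float(box.findtext("ymin"))
        xmax, ymax = float(box.findtext("xmax")), float(box.findtext("ymax"))
        results.append({
            "from_name": "label",
            "to_name": "image",
            "type": "rectanglelabels",
            "original_width": int(width),
            "original_height": int(height),
            "value": {
                # Label Studio expects box coordinates as percentages of image size.
                "x": 100 * xmin / width,
                "y": 100 * ymin / height,
                "width": 100 * (xmax - xmin) / width,
                "height": 100 * (ymax - ymin) / height,
                "rectanglelabels": [obj.findtext("name")],
            },
        })
    return {"data": {"image": image_url}, "predictions": [{"result": results}]}

# Build a one-task import file that can be uploaded through the Label Studio UI or API.
task = voc_to_labelstudio_task("annotations/img_0001.xml", "/data/upload/img_0001.jpg")
print(json.dumps([task], indent=2))
```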

News

Toward Digital Well-Being: Using Generative AI to Detect and Mitigate Bias in Social Networks (Towards Data Science)

Towards Data Science, August 29, 2025. This research answered the question: how can machine learning and artificial intelligence help us to unlearn bias?
The post Toward Digital Well-Being: Using Generative AI to Detect and Mitigate Bias in Social Networks appeared first on Towards Data Science.


News

Surveying the Operational Cybersecurity and Supply Chain Threat Landscape when Developing and Deploying AI Systems (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, August 29, 2025 (arXiv:2508.20307v1, Announce Type: cross)
Abstract: The rise of AI has transformed the software and hardware landscape, enabling powerful capabilities through specialized infrastructures, large-scale data storage, and advanced hardware. However, these innovations introduce unique attack surfaces and objectives which traditional cybersecurity assessments often overlook. Cyber attackers are shifting their objectives from conventional goals like privilege escalation and network pivoting to manipulating AI outputs to achieve desired system effects, such as slowing system performance, flooding outputs with false positives, or degrading model accuracy. This paper serves to raise awareness of the novel cyber threats that are introduced when incorporating AI into a software system. We explore the operational cybersecurity and supply chain risks across the AI lifecycle, emphasizing the need for tailored security frameworks to address evolving threats in the AI-driven landscape. We highlight previous exploitations and provide insights from working in this area. By understanding these risks, organizations can better protect AI systems and ensure their reliability and resilience.

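One concrete supply-chain measure in the spirit of the survey (the paper is a risk overview and does not prescribe this code) is to pin and verify the hash of every model artifact before it is loaded; the sketch below uses a placeholder path and digest.

```python
# Generic supply-chain hygiene sketch: verify a downloaded model artifact against
# a hash pinned at review time before loading it. Path and digest are placeholders.
import hashlib
from pathlib import Path

ARTIFACT = Path("models/classifier-v3.onnx")   # hypothetical artifact
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check: {digest} != {expected_sha256}")

verify_artifact(ARTIFACT, PINNED_SHA256)  # raises if the artifact was tampered with
```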

News

Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, August 29, 2025 (arXiv:2508.20333v1, Announce Type: cross)
Abstract: Large Language Models (LLMs) are aligned to meet ethical standards and safety requirements by training them to refuse answering harmful or unsafe prompts. In this paper, we demonstrate how adversaries can exploit LLMs’ alignment to implant bias or enforce targeted censorship without degrading the model’s responsiveness to unrelated topics. Specifically, we propose Subversive Alignment Injection (SAI), a poisoning attack that leverages the alignment mechanism to trigger refusal on specific topics or queries predefined by the adversary. Although it is perhaps not surprising that refusal can be induced through overalignment, we demonstrate how this refusal can be exploited to inject bias into the model. Surprisingly, SAI evades state-of-the-art poisoning defenses, including LLM state forensics as well as robust aggregation techniques that are designed to detect poisoning in FL settings. We demonstrate the practical dangers of this attack by illustrating its end-to-end impacts on LLM-powered application pipelines. For chat-based applications such as ChatDoctor, with 1% data poisoning the system refuses to answer healthcare questions for a targeted racial category, leading to high bias ($\Delta DP$ of 23%). We also show that bias can be induced in other NLP tasks: for a resume selection pipeline aligned to refuse to summarize CVs from a selected university, high bias in selection ($\Delta DP$ of 27%) results. Even higher bias ($\Delta DP$ ~38%) results on 9 other chat-based downstream applications.


News

Data-Efficient Symbolic Regression via Foundation Model Distillation (cs.AI updates on arXiv.org)

cs.AI updates on arXiv.org, August 28, 2025 (arXiv:2508.19487v1, Announce Type: cross)
Abstract: Discovering interpretable mathematical equations from observed data (a.k.a. equation discovery or symbolic regression) is a cornerstone of scientific discovery, enabling transparent modeling of physical, biological, and economic systems. While foundation models pre-trained on large-scale equation datasets offer a promising starting point, they often suffer from negative transfer and poor generalization when applied to small, domain-specific datasets. In this paper, we introduce EQUATE (Equation Generation via QUality-Aligned Transfer Embeddings), a data-efficient fine-tuning framework that adapts foundation models for symbolic equation discovery in low-data regimes via distillation. EQUATE combines symbolic-numeric alignment with evaluator-guided embedding optimization, enabling a principled embedding-search-generation paradigm. Our approach reformulates discrete equation search as a continuous optimization task in a shared embedding space, guided by data-equation fitness and simplicity. Experiments across three standard public benchmarks (Feynman, Strogatz, and black-box datasets) demonstrate that EQUATE consistently outperforms state-of-the-art baselines in both accuracy and robustness, while preserving low complexity and fast inference. These results highlight EQUATE as a practical and generalizable solution for data-efficient symbolic regression in foundation model distillation settings.

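The "data-equation fitness and simplicity" signal that guides such searches can be illustrated with a simple score that rewards fit to the observations and penalizes expression size; the sketch below is generic and is not EQUATE's actual objective.

```python
# Generic fitness-and-simplicity scoring sketch for symbolic regression,
# not EQUATE's objective: reward data fit, penalize expression complexity.
import numpy as np
import sympy as sp

x = sp.symbols("x")

def score(expr: sp.Expr, xs: np.ndarray, ys: np.ndarray, alpha: float = 0.05) -> float:
    f = sp.lambdify(x, expr, "numpy")
    preds = np.asarray(f(xs), dtype=float)
    mse = float(np.mean((preds - ys) ** 2))
    complexity = sp.count_ops(expr)          # rough proxy for expression size
    return -mse - alpha * complexity         # higher is better

xs = np.linspace(0.1, 2.0, 50)
ys = 3 * xs**2 + 1                           # toy ground-truth data

candidates = [3 * x**2 + 1, x**3, sp.sin(x) + 2]
best = max(candidates, key=lambda e: score(e, xs, ys))
print("best candidate:", best)
```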

News

What Rollup News says about battling disinformation (AI News)

AI News, August 28, 2025. Swarm Network, a platform developing decentralised protocols for AI agents, recently announced the successful results of its first Swarm, a tool (perhaps “organism” is the better term) built to tackle disinformation. Called Rollup News, the swarm is not an app, a software platform, nor a centralised algorithm. It is a decentralised collection of AI agents…
The post What Rollup News says about battling disinformation appeared first on AI News.
