
AI News & Insights

Fighting the New York Times’ invasion of user privacy (OpenAI News)
OpenAI is fighting the New York Times’ demand for 20 million private ChatGPT conversations and accelerating new security and privacy protections to protect your data.

Laplacian Score Sharpening for Mitigating Hallucination in Diffusion Models (cs.AI updates on arXiv.org)
arXiv:2511.07496v1, Announce Type: cross
Abstract: Diffusion models, though successful, are known to suffer from hallucinations that create incoherent or unrealistic samples. Recent works have attributed this to the phenomenon of mode interpolation and score smoothening, but they lack a method to prevent their generation during sampling. In this paper, we propose a post-hoc adjustment to the score function during inference that leverages the Laplacian (or sharpness) of the score to reduce mode interpolation hallucination in unconditional diffusion models across 1D, 2D, and high-dimensional image data. We derive an efficient Laplacian approximation for higher dimensions using a finite-difference variant of the Hutchinson trace estimator. We show that this correction significantly reduces the rate of hallucinated samples across toy 1D/2D distributions and a high-dimensional image dataset. Furthermore, our analysis explores the relationship between the Laplacian and uncertainty in the score.

News
AI News & Insights Featured Image

HybridGuard: Enhancing Minority-Class Intrusion Detection in Dew-Enabled Edge-of-Things Networks AI updates on arXiv.org

HybridGuard: Enhancing Minority-Class Intrusion Detection in Dew-Enabled Edge-of-Things Networkscs.AI updates on arXiv.org arXiv:2511.07793v1 Announce Type: cross
Abstract: Securing Dew-Enabled Edge-of-Things (EoT) networks against sophisticated intrusions is a critical challenge. This paper presents HybridGuard, a framework that integrates machine learning and deep learning to improve intrusion detection. HybridGuard addresses data imbalance through mutual information based feature selection, ensuring that the most relevant features are used to improve detection performance, especially for minority attack classes. The framework leverages Wasserstein Conditional Generative Adversarial Networks with Gradient Penalty (WCGAN-GP) to further reduce class imbalance and enhance detection precision. It adopts a two-phase architecture called DualNetShield to support advanced traffic analysis and anomaly detection, improving the granular identification of threats in complex EoT environments. HybridGuard is evaluated on the UNSW-NB15, CIC-IDS-2017, and IOTID20 datasets, where it demonstrates strong performance across diverse attack scenarios and outperforms existing solutions in adapting to evolving cybersecurity threats. This approach establishes HybridGuard as an effective tool for protecting EoT networks against modern intrusions.

 arXiv:2511.07793v1 Announce Type: cross
Abstract: Securing Dew-Enabled Edge-of-Things (EoT) networks against sophisticated intrusions is a critical challenge. This paper presents HybridGuard, a framework that integrates machine learning and deep learning to improve intrusion detection. HybridGuard addresses data imbalance through mutual information based feature selection, ensuring that the most relevant features are used to improve detection performance, especially for minority attack classes. The framework leverages Wasserstein Conditional Generative Adversarial Networks with Gradient Penalty (WCGAN-GP) to further reduce class imbalance and enhance detection precision. It adopts a two-phase architecture called DualNetShield to support advanced traffic analysis and anomaly detection, improving the granular identification of threats in complex EoT environments. HybridGuard is evaluated on the UNSW-NB15, CIC-IDS-2017, and IOTID20 datasets, where it demonstrates strong performance across diverse attack scenarios and outperforms existing solutions in adapting to evolving cybersecurity threats. This approach establishes HybridGuard as an effective tool for protecting EoT networks against modern intrusions. Read More  
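
The mutual-information-based feature-selection step described in the abstract can be illustrated generically with scikit-learn. The synthetic imbalanced dataset, the choice of k, and the downstream random-forest classifier are assumptions for the sketch; HybridGuard's actual pipeline (including the WCGAN-GP augmentation stage and DualNetShield architecture) is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced synthetic stand-in for an intrusion-detection dataset
# (the paper itself uses UNSW-NB15, CIC-IDS-2017, and IOTID20).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Mutual-information-based selection: keep the k features carrying the most
# information about the class label, which helps the rare attack class.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

clf = RandomForestClassifier(random_state=0).fit(X_tr_sel, y_tr)
print(classification_report(y_te, clf.predict(X_te_sel), digits=3))
```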

Global Optimization on Graph-Structured Data via Gaussian Processes with Spectral Representations (cs.AI updates on arXiv.org)
arXiv:2511.07734v1, Announce Type: cross
Abstract: Bayesian optimization (BO) is a powerful framework for optimizing expensive black-box objectives, yet extending it to graph-structured domains remains challenging due to the discrete and combinatorial nature of graphs. Existing approaches often rely on either the full graph topology (impractical for large or partially observed graphs) or incremental exploration, which can lead to slow convergence. We introduce a scalable framework for global optimization over graphs that employs low-rank spectral representations to build Gaussian process (GP) surrogates from sparse structural observations. The method jointly infers graph structure and node representations through learnable embeddings, enabling efficient global search and principled uncertainty estimation even with limited data. We also provide theoretical analysis establishing conditions for accurate recovery of underlying graph structure under different sampling regimes. Experiments on synthetic and real-world datasets demonstrate that our approach achieves faster convergence and improved optimization performance compared to prior methods.
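
To make "low-rank spectral representations for GP surrogates on graphs" concrete, here is a minimal sketch: a GP kernel built from the leading eigenvectors of a toy graph Laplacian, with the posterior mean used to pick the next node to query. The ring graph, the diffusion-style kernel f(lambda) = exp(-beta*lambda), the noise level, and the greedy acquisition are all assumptions; the paper's learned embeddings and structure inference are not modeled here.

```python
import numpy as np

# Hypothetical toy graph: a ring of n nodes.
n, k = 30, 8
idx = np.arange(n)
A = np.zeros((n, n))
A[idx, (idx + 1) % n] = A[(idx + 1) % n, idx] = 1.0
L = np.diag(A.sum(1)) - A                      # combinatorial graph Laplacian

# Low-rank spectral representation: keep the k smallest-eigenvalue eigenvectors.
evals, evecs = np.linalg.eigh(L)
lam, U = evals[:k], evecs[:, :k]

# Spectral kernel K = U f(lam) U^T with a diffusion-style filter f.
beta, noise = 1.0, 1e-2
K = U @ np.diag(np.exp(-beta * lam)) @ U.T

def gp_posterior_mean(train_idx, y_train):
    """GP posterior mean over all nodes given noisy observations at train_idx."""
    K_tt = K[np.ix_(train_idx, train_idx)] + noise * np.eye(len(train_idx))
    return K[:, train_idx] @ np.linalg.solve(K_tt, y_train)

# Black-box objective on nodes (hypothetical), observed at a few nodes only.
f = np.cos(2 * np.pi * idx / n)
train_idx = np.array([0, 5, 12, 20])
mu = gp_posterior_mean(train_idx, f[train_idx])
print("next candidate node (greedy on posterior mean):", int(np.argmax(mu)))
```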

How to Evaluate Retrieval Quality in RAG Pipelines (Part 3): DCG@k and NDCG@k (Towards Data Science)
The third and final part of the series on evaluating the retrieval quality of your RAG pipeline, this time with graded relevance measures.
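
The metrics in the title are easy to compute directly. The sketch below uses the linear-gain convention DCG@k = sum_i rel_i / log2(i + 1); the article may instead use the exponential gain 2^rel_i - 1, and the sample relevance judgments are hypothetical.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """DCG@k with graded relevance scores listed in retrieved (ranked) order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))   # log2(i + 1) for i = 1..k
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k = DCG@k / IDCG@k, where IDCG@k scores the ideal ordering."""
    idcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / idcg if idcg > 0 else 0.0

# Graded relevance of retrieved chunks, in retriever order
# (hypothetical judgments: 0 = irrelevant .. 3 = highly relevant).
retrieved = [3, 0, 2, 1, 0]
print("DCG@5 :", dcg_at_k(retrieved, 5))
print("NDCG@5:", ndcg_at_k(retrieved, 5))
```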

Feature Detection, Part 2: Laplace & Gaussian Operators (Towards Data Science)
Laplace meets Gaussian: the story of two operators in edge detection.
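
The combination the article covers, smoothing with a Gaussian and then applying the Laplacian (the Laplacian-of-Gaussian operator), can be sketched with SciPy. The synthetic test image, the sigma value, and the crude zero-crossing check are assumptions for illustration; the article's own walkthrough may differ.

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright square on a dark background.
img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0

# Laplacian of Gaussian: Gaussian smoothing (sigma sets the scale) followed by
# the Laplacian; edges appear as zero-crossings of the response.
log_response = ndimage.gaussian_laplace(img, sigma=2.0)

# Crude zero-crossing detection: mark pixels whose response changes sign
# against the right or lower neighbour.
sign = np.sign(log_response)
edges = (sign[:, :-1] * sign[:, 1:] < 0)[:-1, :] | (sign[:-1, :] * sign[1:, :] < 0)[:, :-1]
print("edge pixels found:", int(edges.sum()))
```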

The Ultimate Guide to Power BI Aggregations (Towards Data Science)
Aggregations are one of the most powerful features in Power BI. Learn how to leverage them to improve the performance of your Power BI solution.

Baidu ERNIE multimodal AI beats GPT and Gemini in benchmarks (AI News)
Baidu’s latest ERNIE model, a super-efficient multimodal AI, is beating GPT and Gemini on key benchmarks and targets enterprise data often ignored by text-focused models. For many businesses, valuable insights are locked in engineering schematics, factory-floor video feeds, medical scans, and logistics dashboards. Baidu’s new model, ERNIE-4.5-VL-28B-A3B-Thinking, is designed to fill this gap.