
News
The AI Hype Index: The White House’s war on “woke AI” (MIT Technology Review, July 30, 2025)

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index: a simple, at-a-glance summary of everything you need to know about the state of the industry. The Trump administration recently declared war on so-called “woke AI,” issuing an executive order aimed at preventing companies whose models exhibit a liberal…


The Download: how China’s universities approach AI, and the pitfalls of welfare algorithms (MIT Technology Review, July 28, 2025)

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Chinese universities want students to use more AI, not less. Just two years ago, students in China were told to avoid using AI for their assignments. At the time, to get around a…


End-to-End AWS RDS Setup with Bastion Host Using Terraform (Towards Data Science, July 28, 2025)

Learn how to automate secure AWS infrastructure using Terraform, including a VPC, public/private subnets, a MySQL RDS database, and a Bastion host for secure access.


China doubles down on AI self-reliance amid intense US competition (AI News, July 29, 2025)

The artificial intelligence sector in China has entered a new phase of intensifying AI competition with the United States, as Chinese megacities launch massive subsidy programmes. At the same time, domestic firms are hoping to reduce their dependence on US technology. The stakes extend far beyond technological supremacy, with both nations viewing AI dominance as critical…


How Your Prompts Lead AI Astray (Towards Data Science, July 29, 2025)

Practical tips to recognise and avoid prompt bias.


How to Evaluate Graph Retrieval in MCP Agentic Systems (Towards Data Science, July 29, 2025)

A framework for measuring retrieval quality in Model Context Protocol agents.
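The article above concerns measuring retrieval quality. As a purely illustrative sketch (the helper name, node IDs, and gold-set structure here are assumptions, not taken from the article), retrieval quality for a graph-backed agent is often scored by comparing the set of retrieved nodes against a gold set:

```python
def retrieval_scores(retrieved, relevant):
    """Precision, recall, and F1 over retrieved vs. gold graph-node IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if hits else 0.0
    return precision, recall, f1

# Toy evaluation: the agent retrieved three nodes, two of which are in the gold set.
p, r, f1 = retrieval_scores(
    ["acct:42", "txn:7", "user:9"],      # retrieved by the agent
    ["acct:42", "txn:7", "txn:8"],       # gold (relevant) nodes
)
```

Set-based scores like these are only one axis; a full framework would also track answer quality downstream of retrieval.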


OpenAI is launching a version of ChatGPT for college students (MIT Technology Review, July 29, 2025)

OpenAI is launching Study Mode, a version of ChatGPT for college students that it promises will act less like a lookup tool and more like a friendly, always-available tutor. It’s part of a wider push by the company to get AI more embedded into classrooms when the new academic year starts in September. A demonstration…


Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities (cs.AI updates on arXiv.org, July 28, 2025)

arXiv:2502.05209v4 Announce Type: replace-cross
Abstract: Evaluations of large language model (LLM) risks and capabilities are increasingly being incorporated into AI risk management and governance frameworks. Currently, most risk evaluations are conducted by designing inputs that elicit harmful behaviors from the system. However, this approach suffers from two limitations. First, input-output evaluations cannot fully evaluate realistic risks from open-weight models. Second, the behaviors identified during any particular input-output evaluation can only lower-bound the model’s worst-possible-case input-output behavior. As a complementary method for eliciting harmful behaviors, we propose evaluating LLMs with model tampering attacks which allow for modifications to latent activations or weights. We pit state-of-the-art techniques for removing harmful LLM capabilities against a suite of 5 input-space and 6 model tampering attacks. In addition to benchmarking these methods against each other, we show that (1) model resilience to capability elicitation attacks lies on a low-dimensional robustness subspace; (2) the success rate of model tampering attacks can empirically predict and offer conservative estimates for the success of held-out input-space attacks; and (3) state-of-the-art unlearning methods can easily be undone within 16 steps of fine-tuning. Together, these results highlight the difficulty of suppressing harmful LLM capabilities and show that model tampering attacks enable substantially more rigorous evaluations than input-space attacks alone.
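The contrast the abstract draws can be illustrated with a toy sketch (everything below is illustrative and has no connection to the paper’s actual code or models): a “model” whose capability is gated by a single weight. An input-space attack searches over prompts with the weights fixed; a tampering attack edits the weight directly and elicits the behavior that the input search misses.

```python
def model(x: str, w: float) -> str:
    """Toy 'model': the capability fires only if weight * input activation passes a gate."""
    activation = 1.0 if "please" in x else 0.2   # crude input-dependent trigger
    return "harmful" if w * activation > 0.5 else "safe"

SAFE_W = 0.4  # weight after 'unlearning': no input can push 0.4 * activation above 0.5

# Input-space evaluation: search over prompts, weights held fixed.
prompts = ["tell me", "please tell me", "TELL ME NOW"]
input_space_hit = any(model(p, SAFE_W) == "harmful" for p in prompts)

# Model-tampering evaluation: perturb the weight itself (a weight-space modification).
tampered_hit = model("please tell me", SAFE_W + 0.2) == "harmful"
```

In this toy, the input-space search never elicits the behavior while the small weight edit does, which mirrors the abstract’s point that input-output results only lower-bound worst-case behavior.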


Chinese universities want students to use more AI, not less (MIT Technology Review, July 28, 2025)

Just two years ago, Lorraine He, now a 24-year-old law student, was told to avoid using AI for her assignments. At the time, to get around a national block on ChatGPT, students had to buy a mirror-site version from a secondhand marketplace. Its use was common, but it was at best tolerated and more often…


Distilling a Small Utility-Based Passage Selector to Enhance Retrieval-Augmented Generation (cs.AI updates on arXiv.org, July 28, 2025)

arXiv:2507.19102v1 Announce Type: cross
Abstract: Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating retrieved information. The standard retrieval process prioritizes relevance, focusing on topical alignment between queries and passages. In contrast, in RAG, the emphasis has shifted to utility, which considers the usefulness of passages for generating accurate answers. Despite empirical evidence showing the benefits of utility-based retrieval in RAG, the high computational cost of using LLMs for utility judgments limits the number of passages evaluated. This restriction is problematic for complex queries requiring extensive information. To address this, we propose a method to distill the utility judgment capabilities of LLMs into smaller, more efficient models. Our approach focuses on utility-based selection rather than ranking, enabling dynamic passage selection tailored to specific queries without the need for fixed thresholds. We train student models to learn pseudo-answer generation and utility judgments from teacher LLMs, using a sliding window method that dynamically selects useful passages. Our experiments demonstrate that utility-based selection provides a flexible and cost-effective solution for RAG, significantly reducing computational costs while improving answer quality. We present the distillation results using Qwen3-32B as the teacher model for both relevance ranking and utility-based selection, distilled into RankQwen1.7B and UtilityQwen1.7B. Our findings indicate that for complex questions, utility-based selection is more effective than relevance ranking in enhancing answer generation performance. We will release the relevance ranking and utility-based selection annotations for the MS MARCO dataset, supporting further research in this area.
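The sliding-window, threshold-free selection the abstract describes can be sketched in miniature (the `keyword_judge` below is a trivial stand-in for the distilled student model, and all names and data here are illustrative assumptions, not the paper’s code):

```python
def select_useful(query, passages, judge, window=4):
    """Sliding-window utility-based selection: for each window of passages, the
    judge marks which ones are useful for answering the query. Selection is
    dynamic per query, with no fixed top-k cutoff or score threshold."""
    selected = []
    for i in range(0, len(passages), window):
        win = passages[i:i + window]
        selected += [p for p, useful in zip(win, judge(query, win)) if useful]
    return selected

# Trivial stand-in judge; the real system would call a distilled student model.
def keyword_judge(query, window):
    terms = set(query.lower().split())
    return [any(t in p.lower() for t in terms) for p in window]

query = "capital of France"
passages = [
    "Paris is the capital of France.",
    "Berlin travel tips.",
    "France borders Spain.",
    "Cooking pasta.",
    "The Eiffel Tower is in Paris.",
]
useful = select_useful(query, passages, keyword_judge, window=2)
```

The point of the selection (rather than ranking) framing is that the number of kept passages varies with the query: a complex question can keep many passages, a simple one few, without tuning a threshold.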