
News

DeepSeek reverts to Nvidia for R2 model after Huawei AI chip fails
AI News · August 14, 2025

DeepSeek’s plan to train its new AI model, R2, on Huawei’s Ascend chips has failed, forcing a retreat to Nvidia and delaying the launch. For months, the narrative pushed by Beijing has been one of unstoppable technological progress and a march towards self-sufficiency. However, reality has a habit of biting back. The recent troubles of […]

Solutions Architect Associate

AWS Certified Solutions Architect – Associate (SAA-C03): Career Value & Professional Impact 2025
By Derrick Jackson, with co-author Lisa Yu

Most people think cloud certifications are just technical checkboxes. They’re wrong. The AWS Certified Solutions Architect – Associate certification stands out as the most widely held credential, with 54% of AWS professionals holding this specific certification, placing it far ahead of others like the Cloud Practitioner (37%) and Developer – […]

Certified in Cybersecurity

Ultimate CC – Certified in Cybersecurity Overview: Proven Career Launch & Complete Foundation 2025
By Derrick Jackson, with co-author Lisa Yu

August 31, 2022. That’s when (ISC)² launched something different in cybersecurity education. Not another expensive barrier to entry, but a genuinely accessible pathway into the field. The CC certification breaks the traditional mold. While most cybersecurity credentials demand years of experience or thousands in […]

Daily AI News

Securing Educational LLMs: A Generalised Taxonomy of Attacks on LLMs and DREAD Risk Assessment
cs.AI updates on arXiv.org · August 13, 2025 · arXiv:2508.08629v1 (announce type: cross)

Abstract: Due to perceptions of efficiency and significant productivity gains, various organisations, including in education, are adopting Large Language Models (LLMs) into their workflows. Educator-facing, learner-facing, and institution-facing LLMs, collectively Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations. However, their integration into an educational setting raises significant cybersecurity concerns. A comprehensive landscape of contemporary attacks on LLMs and their impact on the educational environment is missing. This study presents a generalised taxonomy of fifty attacks on LLMs, categorised as attacks targeting either models or their infrastructure. The severity of these attacks is evaluated in the educational sector using the DREAD risk assessment framework. Our risk assessment indicates that token smuggling, adversarial prompts, direct injection, and multi-step jailbreak are critical attacks on eLLMs. The proposed taxonomy, its application in the educational environment, and our risk assessment will help academic and industrial practitioners build resilient solutions that protect learners and institutions.
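The abstract relies on the DREAD model (Damage, Reproducibility, Exploitability, Affected users, Discoverability) to rank attack severity. As a rough illustration of how a per-attack DREAD score can be computed, here is a minimal sketch; the 1–10 scale, the averaging rule, and the example ratings are assumptions for illustration, not values taken from the paper.

from dataclasses import dataclass

@dataclass
class DreadScore:
    """One DREAD rating per attack, each factor on an assumed 1-10 scale."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def severity(self) -> float:
        # A common DREAD convention: overall risk is the mean of the five factors.
        factors = (self.damage, self.reproducibility, self.exploitability,
                   self.affected_users, self.discoverability)
        return sum(factors) / len(factors)

# Hypothetical ratings for two of the attack classes named in the abstract.
attacks = {
    "direct prompt injection": DreadScore(8, 9, 8, 7, 9),
    "token smuggling": DreadScore(7, 8, 7, 6, 8),
}

for name, score in sorted(attacks.items(), key=lambda kv: kv[1].severity(), reverse=True):
    print(f"{name}: {score.severity():.1f}")

Ranking attacks this way is what lets the authors single out a short list of "critical" attacks for the education sector rather than treating all fifty equally.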

Daily AI News

Coconut: A Framework for Latent Reasoning in LLMs
Towards Data Science · August 12, 2025

Explaining Coconut (Training Large Language Models to Reason in a Continuous Latent Space) in simple terms.
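Coconut’s core idea, as described in the paper the article explains, is to let the model "think" in hidden-state space: instead of decoding each reasoning step to a token and re-embedding it, the final hidden state is fed back directly as the next input embedding. The sketch below is a simplified illustration of that loop, not the authors’ implementation; the Hugging Face-style model interface and the number of latent steps are placeholders.

import torch

def latent_reasoning_step(model, input_embeds, num_latent_steps=4):
    """Run a few 'continuous thought' steps: reuse the final hidden state of the
    last position as the input embedding of the next position, without decoding
    any intermediate tokens. `model` is assumed to be a causal LM that accepts
    `inputs_embeds` and can return hidden states (as Hugging Face models do)."""
    embeds = input_embeds  # shape: (batch, seq_len, hidden_dim)
    for _ in range(num_latent_steps):
        outputs = model(inputs_embeds=embeds, output_hidden_states=True)
        # Final layer, last position: the "continuous thought" for this step.
        last_hidden = outputs.hidden_states[-1][:, -1:, :]
        # Append it as the next input embedding instead of a token embedding.
        embeds = torch.cat([embeds, last_hidden], dim=1)
    return embeds  # decoding can then resume in token space as usual

This only works cleanly when the model’s embedding and hidden dimensions match, which is the case for standard decoder-only transformers.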

SSCP

SSCP Certification Guide: Hands-On Security Skills & Career Impact 2025
By Derrick Jackson, with co-author Lisa Yu

PayScale surveyed 553 SSCP certification holders in 2025. Average base salary: $84,000 (PayScale SSCP Salary Data). Not bad for a credential most people haven’t heard of. The Systems Security Certified Practitioner targets a specific niche. You’re not the CISO making boardroom presentations. You’re configuring firewalls. Managing user […]

Daily AI News

Anthropic details its AI safety strategy
AI News · August 13, 2025

Anthropic has detailed its safety strategy for keeping its popular AI model, Claude, helpful while avoiding perpetuating harms. Central to this effort is Anthropic’s Safeguards team, who aren’t your average tech support group: they’re a mix of policy experts, data scientists, engineers, and threat analysts who know how bad actors think. However, Anthropic’s […]

News

Do Biased Models Have Biased Thoughts?
cs.AI updates on arXiv.org · August 13, 2025 · arXiv:2508.06671v2 (announce type: replace-cross)

Abstract: The impressive performance of language models is undeniable. However, the presence of biases based on gender, race, socio-economic status, physical appearance, and sexual orientation makes the deployment of language models challenging. This paper studies the effect of chain-of-thought prompting, a recent approach that examines the steps a model follows before it responds, on fairness. More specifically, we ask the following question: do biased models have biased thoughts? To answer this question, we conduct experiments on 5 popular large language models, using fairness metrics to quantify 11 different biases in the models’ thoughts and outputs. Our results show that the bias in the thinking steps is not highly correlated with the output bias (less than 0.6 correlation, with a p-value smaller than 0.001 in most cases). In other words, unlike human beings, the tested models with biased decisions do not always possess biased thoughts.
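The headline number in this abstract is a correlation between bias measured in the reasoning steps and bias measured in the final output. As a minimal sketch of how such a check could be run, here is one way to correlate two per-example bias scores; the Pearson choice and the toy numbers are assumptions for illustration, not the paper’s actual metrics or data.

# Correlate per-example bias scores computed on the model's chain-of-thought
# with bias scores computed on its final answers. The values below are made up.
from scipy import stats

thought_bias = [0.12, 0.40, 0.05, 0.33, 0.21, 0.50, 0.08, 0.27]  # per-example CoT bias
output_bias  = [0.30, 0.10, 0.45, 0.22, 0.05, 0.38, 0.41, 0.15]  # per-example answer bias

r, p = stats.pearsonr(thought_bias, output_bias)
print(f"correlation r = {r:.2f}, p-value = {p:.3f}")
# A weak correlation (e.g. r < 0.6) would mean biased outputs are not reliably
# preceded by biased reasoning steps, which is the relationship the paper reports.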

Daily AI News

Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies
MIT Technology Review · August 13, 2025

In 1940, a fresh-faced Ronald Reagan starred as US Secret Service agent Brass Bancroft in Murder in the Air, an action film centered on a fictional “superweapon” that could stop enemy aircraft midflight. A mock newspaper in the movie hails it as the “greatest peace argument ever invented.” The experimental weapon is “the exclusive property […]

News

How We Reduced LLM Costs by 90% with 5 Lines of Code
Towards Data Science · August 21, 2025

When clean code hides inefficiencies: what we learned from fixing a few lines of code and saving 90% in LLM cost.
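The teaser does not say which lines were changed, so the article’s actual fix is not reproduced here. Purely as an illustration of the kind of few-line change that can cut LLM spend, here is a sketch of memoising repeated, identical prompts so each unique request is sent upstream only once; `call_llm` is a hypothetical placeholder for whatever client a codebase already uses.

from functools import lru_cache

def call_llm(prompt: str, model: str) -> str:
    """Placeholder for the real LLM client call; swap in your provider's SDK."""
    return f"[response from {model} for: {prompt[:30]}...]"

@lru_cache(maxsize=10_000)
def cached_completion(prompt: str, model: str = "example-model") -> str:
    # Identical (prompt, model) pairs hit the in-memory cache instead of the API,
    # so repeated calls cost nothing after the first one.
    return call_llm(prompt, model)

if __name__ == "__main__":
    for _ in range(3):
        cached_completion("Summarise this support ticket: ...")
    print(cached_completion.cache_info())  # hits=2, misses=1 -> two upstream calls avoided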