Insights News

Alan Turing Institute: Humanities are key to the future of AI (AI News, August 7, 2025)

A powerhouse team has launched a new initiative called ‘Doing AI Differently,’ which calls for a human-centred approach to future development. For years, we’ve treated AI’s outputs like they’re the results of a giant math problem. But the researchers – from The Alan Turing Institute, the University of Edinburgh, AHRC-UKRI, and the Lloyd’s Register Foundation…

Compressing Large Language Models with PCA Without Performance Loss (cs.AI updates on arXiv.org, August 7, 2025)

arXiv:2508.04307v1 Announce Type: cross
Abstract: We demonstrate that Principal Component Analysis (PCA), when applied in a structured manner, either to polar-transformed images or segment-wise to token sequences, enables extreme compression of neural models without sacrificing performance. Across three case studies, we show that a one-layer classifier trained on PCA-compressed polar MNIST achieves over 98 percent accuracy using only 840 parameters. A two-layer transformer trained on 70-dimensional PCA-reduced MiniLM embeddings reaches 76.62 percent accuracy on the 20 Newsgroups dataset with just 81,000 parameters. A decoder-only transformer generates coherent token sequences from 70-dimensional PCA embeddings while preserving over 97 percent cosine similarity with full MiniLM representations, using less than 17 percent of the parameter count of GPT-2. These results highlight PCA-based input compression as a general and effective strategy for aligning model capacity with information content, enabling lightweight architectures across multiple modalities.

The greenhouse gases we’re not accounting for (MIT Technology Review, August 7, 2025)

In the spring of 2021, climate scientists were stumped. The global economy was just emerging from the covid-19 lockdowns, but for some reason the levels of methane—a greenhouse gas emitted mainly through agriculture and fossil-fuel production—had soared in the atmosphere the previous year, rising at the fastest rate on record. Researchers around the world set…

AI obsession is costing us our human skills (AI News, August 6, 2025)

A growing body of evidence suggests that over-reliance on AI could be eroding the human skills needed to use it effectively. Research warns this emerging human skills deficit threatens the successful adoption of AI and, with it, an opportunity for economic growth. It feels like not a day goes by without another proclamation about how…

Generative AI trends 2025: LLMs, data scaling & enterprise adoption (AI News, August 6, 2025)

Generative AI is entering a more mature phase in 2025. Models are being refined for accuracy and efficiency, and enterprises are embedding them into everyday workflows. The focus is shifting from what these systems could do to how they can be applied reliably and at scale. What’s emerging is a clearer picture of what it…

Five ways that AI is learning to improve itself (MIT Technology Review, August 6, 2025)

Last week, Mark Zuckerberg declared that Meta is aiming to achieve smarter-than-human AI. He seems to have a recipe for achieving that goal, and the first ingredient is human talent: Zuckerberg has reportedly tried to lure top researchers to Meta Superintelligence Labs with nine-figure offers. The second ingredient is AI itself. Zuckerberg recently said on…

PennyLang: Pioneering LLM-Based Quantum Code Generation with a Novel PennyLane-Centric Dataset (cs.AI updates on arXiv.org, August 6, 2025)

arXiv:2503.02497v3 Announce Type: replace-cross
Abstract: Large Language Models (LLMs) offer powerful capabilities in code generation, natural language understanding, and domain-specific reasoning. Their application to quantum software development remains limited, in part because of the lack of high-quality datasets both for LLM training and as dependable knowledge sources. To bridge this gap, we introduce PennyLang, an off-the-shelf, high-quality dataset of 3,347 PennyLane-specific quantum code samples with contextual descriptions, curated from textbooks, official documentation, and open-source repositories. Our contributions are threefold: (1) the creation and open-source release of PennyLang, a purpose-built dataset for quantum programming with PennyLane; (2) a framework for automated quantum code dataset construction that systematizes curation, annotation, and formatting to maximize downstream LLM usability; and (3) a baseline evaluation of the dataset across multiple open-source models, including ablation studies, all conducted within a retrieval-augmented generation (RAG) pipeline. Using PennyLang with RAG substantially improves performance: for example, Qwen 7B’s success rate rises from 8.7% without retrieval to 41.7% with full-context augmentation, and LLaMa 4 improves from 78.8% to 84.8%, while also reducing hallucinations and enhancing quantum code correctness. Moving beyond Qiskit-focused studies, we bring LLM-based tools and reproducible methods to PennyLane for advancing AI-assisted quantum development.
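
The RAG pipeline referenced in the evaluation follows a standard retrieve-then-prompt pattern; below is a minimal sketch of that pattern, not the paper's actual framework. The two-sample in-memory corpus, the embedding model, and the prompt template are all illustrative assumptions; in practice the corpus would be the 3,347 PennyLang samples.

```python
# Minimal retrieve-then-prompt sketch of a RAG pipeline for code generation.
# The corpus is a placeholder for the PennyLang dataset, and the prompt
# format is an assumption, not the paper's actual template.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = [
    "import pennylane as qml\ndev = qml.device('default.qubit', wires=2)",
    "Bell state: qml.Hadamard(wires=0); qml.CNOT(wires=[0, 1])",
]
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def build_prompt(query: str, k: int = 2) -> str:
    """Prepend the k most similar corpus samples to the generation prompt."""
    q_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb          # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    context = "\n\n".join(corpus[i] for i in top)
    return f"Reference PennyLane examples:\n{context}\n\nTask: {query}\nCode:"

# The augmented prompt is then handed to any code LLM (e.g. Qwen 7B).
print(build_prompt("Prepare a Bell state and measure both qubits."))
```

Augmentation of this kind is what the reported gains measure: the retrieved samples supply PennyLane API details the model would otherwise hallucinate.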

The Download: OpenAI’s open-weight models, and the future of internet search (MIT Technology Review, August 6, 2025)

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI has finally released open-weight language models. The news: OpenAI has finally released its first open-weight large language models since 2019’s GPT-2. Unlike the models available through OpenAI’s web interface, these new open…

Reinitializing weights vs units for maintaining plasticity in neural networks (cs.AI updates on arXiv.org, August 4, 2025)

arXiv:2508.00212v1 Announce Type: cross
Abstract: Loss of plasticity is a phenomenon in which a neural network loses its ability to learn when trained for an extended time on non-stationary data. It is a crucial problem to overcome when designing systems that learn continually. An effective technique for preventing loss of plasticity is reinitializing parts of the network. In this paper, we compare two different reinitialization schemes: reinitializing units vs reinitializing weights. We propose a new algorithm, which we name selective weight reinitialization, for reinitializing the least useful weights in a network. We compare our algorithm to continual backpropagation and ReDo, two previously proposed algorithms that reinitialize units in the network. Through our experiments in continual supervised learning problems, we identify two settings when reinitializing weights is more effective at maintaining plasticity than reinitializing units: (1) when the network has a small number of units and (2) when the network includes layer normalization. Conversely, reinitializing weights and units are equally effective at maintaining plasticity when the network is of sufficient size and does not include layer normalization. We found that reinitializing weights maintains plasticity in a wider variety of settings than reinitializing units.
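
The abstract names the algorithm but not its utility measure, so the sketch below uses weight magnitude as a stand-in proxy for "least useful"; the reinitialization fraction is likewise an assumption. It only illustrates the mechanism: periodically pick the lowest-utility weights and re-draw them from the initializer.

```python
# Sketch of the selective-weight-reinitialization idea. Weight magnitude is
# an assumed proxy for usefulness; the paper's actual utility measure is not
# given in the abstract.
import torch
import torch.nn as nn

@torch.no_grad()
def selective_weight_reinit(layer: nn.Linear, fraction: float = 0.01) -> None:
    """Reinitialize the `fraction` of weights with the smallest magnitude."""
    w = layer.weight
    k = max(1, int(fraction * w.numel()))
    # Indices of the k smallest-magnitude weights (assumed utility proxy).
    idx = torch.topk(w.abs().flatten(), k, largest=False).indices
    # Draw fresh values from the layer's default Kaiming-uniform initializer.
    fresh = torch.empty_like(w)
    nn.init.kaiming_uniform_(fresh, a=5 ** 0.5)
    w.view(-1)[idx] = fresh.view(-1)[idx]

layer = nn.Linear(128, 64)
selective_weight_reinit(layer, fraction=0.05)  # e.g. every N training steps
```

Resetting individual weights rather than whole units is the contrast the paper draws with continual backpropagation and ReDo, which reinitialize entire units at a time.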

Tencent releases versatile open-source Hunyuan AI models (AI News, August 4, 2025)

Tencent has expanded its family of open-source Hunyuan AI models that are versatile enough for broad use. This new family of models is engineered to deliver powerful performance across computational environments, from small edge devices to demanding, high-concurrency production systems. The release includes a comprehensive set of pre-trained and instruction-tuned models available on the developer…
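
For readers who want to try the models, an open-weight release of this kind is typically loadable through the standard Hugging Face transformers API. The sketch below shows that generic loading path; the model ID is a hypothetical placeholder, so check Tencent's Hugging Face organization for the actual released checkpoints.

```python
# Generic transformers loading sketch for an instruction-tuned checkpoint.
# The model ID is an assumption for illustration; verify the real one on
# Tencent's Hugging Face page before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-7B-Instruct"  # hypothetical ID, verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the Hunyuan model family."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```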