Tag: security implications
-
The Register: Hijacker helper VoidProxy boosts Google, Microsoft accounts on demand
Source URL: https://www.theregister.com/2025/09/11/voidproxy_phishing_service/
Source: The Register
Feedly Summary: Okta uncovers new phishing-as-a-service operation with ‘multiple entities’ falling victim. Multiple attackers using a new phishing service dubbed VoidProxy to target organizations’ Microsoft and Google accounts have successfully stolen users’ credentials, multi-factor authentication codes, and session tokens…
-
The Register: More packages poisoned in npm attack, but would-be crypto thieves left pocket change
Source URL: https://www.theregister.com/2025/09/09/npm_supply_chain_attack/
Source: The Register
Feedly Summary: Miscreants cost victims time rather than money. During the two-hour window on Monday in which hijacked npm versions were available for download, malware-laced packages reached one in 10 cloud environments, according to Wiz…
-
The Register: Salt Typhoon used dozens of domains, going back five years. Did you visit one?
Source URL: https://www.theregister.com/2025/09/08/salt_typhoon_domains/
Source: The Register
Feedly Summary: Plus ties to the Chinese spies who hacked Barracuda email gateways. Security researchers have uncovered dozens of domains used by Chinese espionage crew Salt Typhoon to gain stealthy, long-term access to victim…
-
Wired: Psychological Tricks Can Get AI to Break the Rules
Source URL: https://arstechnica.com/science/2025/09/these-psychological-tricks-can-get-llms-to-respond-to-forbidden-prompts/
Source: Wired
Feedly Summary: Researchers convinced large language model chatbots to comply with “forbidden” requests using a variety of conversational tactics.
AI Summary and Description: Yes
Summary: The text discusses researchers’ exploration of conversational tactics used to manipulate large language model (LLM)…
-
Unit 42: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust
Source URL: https://unit42.paloaltonetworks.com/model-namespace-reuse/
Source: Unit 42
Feedly Summary: Model namespace reuse is a potential security risk in the AI supply chain. Attackers can misuse platforms like Hugging Face for remote code execution.
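The namespace-reuse risk described above can be illustrated with a minimal sketch. This is a hypothetical in-memory registry, not the real Hugging Face API: loading a model by namespace/name alone trusts whoever currently owns that name, whereas pinning a content hash recorded at vetting time detects when a re-registered namespace serves different bytes.

```python
import hashlib

# Hypothetical in-memory model registry mapping "namespace/name" -> model bytes.
# Purely illustrative; real model hubs resolve names over the network.
registry = {"acme/sentiment": b"original model weights"}

def digest(blob: bytes) -> str:
    """Content hash of a model artifact."""
    return hashlib.sha256(blob).hexdigest()

# Hash recorded when the model was originally vetted.
PINNED = digest(registry["acme/sentiment"])

def load_by_name(ref: str) -> bytes:
    # Name-only resolution: returns whatever currently owns the namespace.
    return registry[ref]

def load_pinned(ref: str, expected: str) -> bytes:
    # Hash-pinned resolution: refuses artifacts that differ from the vetted copy.
    blob = registry[ref]
    if digest(blob) != expected:
        raise ValueError(f"{ref}: content hash mismatch, refusing to load")
    return blob

# Original author abandons the account; an attacker re-registers the namespace.
registry["acme/sentiment"] = b"malicious payload"

print(load_by_name("acme/sentiment"))  # silently serves the attacker's model
try:
    load_pinned("acme/sentiment", PINNED)
except ValueError as err:
    print(err)  # the pinned loader rejects the swapped artifact
```

The same idea applies to real hubs: pinning an immutable revision (e.g. a commit hash) rather than a mutable name removes the trust placed in namespace ownership.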
-
NCSC Feed: From bugs to bypasses: adapting vulnerability disclosure for AI safeguards
Source URL: https://www.ncsc.gov.uk/blog-post/from-bugs-to-bypasses-adapting-vulnerability-disclosure-for-ai-safeguards
Source: NCSC Feed
Feedly Summary: Exploring how far cyber security approaches can help mitigate risks in generative AI systems.
AI Summary and Description: Yes
Summary: The text addresses the intersection of cybersecurity strategies and generative AI systems, highlighting how established cybersecurity…