Tag: insecure code
-
Slashdot: AI Code Generators Are Writing Vulnerable Software Nearly Half the Time, Analysis Finds
Source URL: https://developers.slashdot.org/story/25/07/30/150216/ai-code-generators-are-writing-vulnerable-software-nearly-half-the-time-analysis-finds?utm_source=rss1.0mainlinkanon&utm_medium=feed
AI Summary and Description: Yes
Summary: The excerpt discusses alarming findings from Veracode’s 2025 GenAI Code Security Report, indicating significant security flaws in AI-generated code. Nearly 45% of the tested coding tasks showed vulnerabilities,…
-
The Register: ‘Ongoing’ Ivanti hijack bug exploitation reaches clouds
Source URL: https://www.theregister.com/2025/05/21/ivanti_rce_attacks_ongoing/
Feedly Summary: Nothing like insecure code in security suites. The “ongoing exploitation” of two Ivanti bugs has now extended beyond on-premises environments and hit customers’ cloud instances, according to security shop Wiz.…
AI Summary and Description: Yes
Summary: The text highlights…
-
Schneier on Security: “Emergent Misalignment” in LLMs
Source URL: https://www.schneier.com/blog/archives/2025/02/emergent-misalignment-in-llms.html
Feedly Summary: Interesting research: “Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs”:
Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model…
-
The Register: Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o
Source URL: https://www.theregister.com/2025/02/27/llm_emergent_misalignment_study/
Feedly Summary: Model was fine-tuned to write vulnerable software – then suggested enslaving humanity. Computer scientists have found that fine-tuning notionally safe large language models to do one thing badly can negatively…
-
Hacker News: Python’s official documentation contains textbook example of insecure code (XSS)
Source URL: https://seclists.org/fulldisclosure/2025/Feb/15
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text highlights a critical security issue within Python’s documentation related to Cross-Site Scripting (XSS) vulnerabilities stemming from examples in the CGI module. This poses significant risks for web…
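The advisory concerns documentation examples that echo user input into HTML without escaping. A minimal sketch of that class of bug and its standard fix via `html.escape` (the function names and markup here are illustrative, not taken from the Python docs):

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable pattern: user-controlled data interpolated into HTML verbatim.
    return "<p>Your comment: %s</p>" % user_input

def render_comment_safe(user_input: str) -> str:
    # Mitigation: escape before embedding, so markup arrives as inert text.
    return "<p>Your comment: %s</p>" % html.escape(user_input)

payload = '<script>alert("xss")</script>'
print(render_comment_unsafe(payload))  # script tag survives intact; a browser would execute it
print(render_comment_safe(payload))    # emitted as &lt;script&gt;... and rendered as plain text
```

The same principle applies to any server-side templating: escaping belongs at the point where untrusted data meets markup, which is exactly what the flagged CGI examples omitted.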
-
CSA: DeepSeek 11x More Likely to Generate Harmful Content
Source URL: https://cloudsecurityalliance.org/blog/2025/02/19/deepseek-r1-ai-model-11x-more-likely-to-generate-harmful-content-security-research-finds
AI Summary and Description: Yes
Summary: The text presents a critical analysis of DeepSeek’s R1 AI model, highlighting its ethical and security deficiencies that raise significant concerns for national and global safety, particularly in the context of the…