Tag: large language models

  • Slashdot: Sam Altman Says Bots Are Making Social Media Feel ‘Fake’

    Source URL: https://tech.slashdot.org/story/25/09/09/0048216/sam-altman-says-bots-are-making-social-media-feel-fake?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses Sam Altman’s observations on the prevalence of bots and AI-generated content on social media platforms, particularly regarding the OpenAI Codex. Altman expresses concern about the authenticity of social…

  • Cloud Blog: Registration now open: Our no-cost, generative AI training and certification program for veterans

    Source URL: https://cloud.google.com/blog/topics/training-certifications/register-for-the-gen-ai-training-and-certification-program-for-veterans/
    Feedly Summary: Growing up in a Navy family instilled a strong sense of purpose in me. My father’s remarkable 42 years of naval service not only shaped my values, but inspired me to join the…

  • Slashdot: Microsoft’s Analog Optical Computer Shows AI Promise

    Source URL: https://hardware.slashdot.org/story/25/09/08/0125250/microsofts-analog-optical-computer-shows-ai-promise?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses a project by Microsoft Research involving an analog optical computer (AOC) designed for AI workloads, significantly enhancing computation speed and energy efficiency compared to traditional GPUs. The initiative offers opportunities for…

  • Simon Willison’s Weblog: Is the LLM response wrong, or have you just failed to iterate it?

    Source URL: https://simonwillison.net/2025/Sep/7/is-the-llm-response-wrong-or-have-you-just-failed-to-iterate-it/#atom-everything
    Feedly Summary: Is the LLM response wrong, or have you just failed to iterate it? More from Mike Caulfield (see also the SIFT method). He starts with a fantastic example of Google’s AI mode…

  • Wired: Psychological Tricks Can Get AI to Break the Rules

    Source URL: https://arstechnica.com/science/2025/09/these-psychological-tricks-can-get-llms-to-respond-to-forbidden-prompts/
    Feedly Summary: Researchers convinced large language model chatbots to comply with “forbidden” requests using a variety of conversational tactics.
    AI Summary and Description: Yes
    Summary: The text discusses researchers’ exploration of conversational tactics used to manipulate large language model (LLM)…

  • Simon Willison’s Weblog: Kimi-K2-Instruct-0905

    Source URL: https://simonwillison.net/2025/Sep/6/kimi-k2-instruct-0905/#atom-everything
    Feedly Summary: Kimi-K2-Instruct-0905 New not-quite-MIT licensed model from Chinese Moonshot AI, a follow-up to the highly regarded Kimi-K2 model they released in July. This one is an incremental improvement – I’ve seen it referred to online as “Kimi K-2.1”. It scores a little higher on a…

  • Slashdot: Switzerland Releases Open-Source AI Model Built For Privacy

    Source URL: https://news.slashdot.org/story/25/09/03/2125252/switzerland-releases-open-source-ai-model-built-for-privacy?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: AI Summary and Description: Yes
    Summary: Switzerland’s launch of Apertus, a fully open-source multilingual LLM, emphasizes transparency and privacy in AI development. By providing open access to the model’s components and adhering to stringent Swiss data protection laws, Apertus…

  • The Register: Crims claim HexStrike AI penetration tool makes quick work of Citrix bugs

    Source URL: https://www.theregister.com/2025/09/03/hexstrike_ai_citrix_exploits/
    Feedly Summary: LLMs and 0-days – what could possibly go wrong? Attackers on underground forums claimed they were using HexStrike AI, an open-source red-teaming tool, against Citrix NetScaler vulnerabilities within hours of disclosure, according to Check…

  • The Register: It looks like you’re ransoming data. Would you like some help?

    Source URL: https://www.theregister.com/2025/09/03/ransomware_ai_abuse/
    Feedly Summary: AI-powered ransomware, extortion chatbots, vibe hacking… just wait until agents replace affiliates. It’s no secret that AI tools make it easier for cybercriminals to steal sensitive data and then extort victim organizations. But two…