Tag: multilingual
-
Slashdot: Google Unveils Gemini 2.0
Source URL: https://tech.slashdot.org/story/24/12/12/2129245/google-unveils-gemini-20
Source: Slashdot
Title: Google Unveils Gemini 2.0
Feedly Summary: AI Summary and Description: Yes **Summary:** Google has launched Gemini 2.0, enhancing its AI capabilities with multimodal functionalities, real-time tool use, and advanced reasoning to foster unique experiences. This upgrade features notable projects like Project Astra and specialized agents for automation, supported by…
-
Simon Willison’s Weblog: New Pleias 1.0 LLMs trained exclusively on openly licensed data
Source URL: https://simonwillison.net/2024/Dec/5/pleias-llms/#atom-everything
Source: Simon Willison’s Weblog
Title: New Pleias 1.0 LLMs trained exclusively on openly licensed data
Feedly Summary: New Pleias 1.0 LLMs trained exclusively on openly licensed data. I wrote about the Common Corpus public domain dataset back in March. Now Pleias, the team behind Common Corpus, have released the first family of…
-
Simon Willison’s Weblog: QwQ: Reflect Deeply on the Boundaries of the Unknown
Source URL: https://simonwillison.net/2024/Nov/27/qwq/#atom-everything
Source: Simon Willison’s Weblog
Title: QwQ: Reflect Deeply on the Boundaries of the Unknown
Feedly Summary: QwQ: Reflect Deeply on the Boundaries of the Unknown. A brand new openly licensed model from Alibaba Cloud’s Qwen team, this time clearly inspired by OpenAI’s work on reasoning in o1. I love how they introduce the new…
-
Slashdot: AI Lab PleIAs Releases Fully Open Dataset, as AMD, Ai2 Release Open AI Models
Source URL: https://news.slashdot.org/story/24/11/16/0326222/ai-lab-pleias-releases-fully-open-dataset-as-amd-ai2-release-open-ai-models?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: AI Lab PleIAs Releases Fully Open Dataset, as AMD, Ai2 Release Open AI Models
Feedly Summary: AI Summary and Description: Yes Summary: The text outlines PleIAs’ commitment to open training for large language models (LLMs) through the release of Common Corpus, highlighting the significance of open data for LLM…
-
Simon Willison’s Weblog: Releasing the largest multilingual open pretraining dataset
Source URL: https://simonwillison.net/2024/Nov/14/releasing-the-largest-multilingual-open-pretraining-dataset/#atom-everything
Source: Simon Willison’s Weblog
Title: Releasing the largest multilingual open pretraining dataset
Feedly Summary: Releasing the largest multilingual open pretraining dataset. Common Corpus is a new “open and permissibly licensed text dataset, comprising over 2 trillion tokens (2,003,039,184,047 tokens)” released by French AI lab PleIAs. This appears to be the largest available…