AI news August 2024: EU AI Act, Microsoft’s Phi-3.5 AI models, and ChatGPT’s 200M users

Eva Slonkova
September 26, 2024
As of August 1, 2024, the EU AI Act has officially become a legal framework for AI governance. This pioneering legislation aims to ensure that AI systems operating within the EU are safe and reliable, categorizing applications based on risk levels and imposing appropriate obligations.

This AI news roundup for August 2024 summarizes the key developments in the AI landscape over the past month, highlighting the implications of the AI Act alongside other notable advances in the sector.

The key AI stories of August 2024 include:

  1. Europe’s AI Act takes effect
  2. OpenAI and Anthropic partner with the U.S. AI Safety Institute
  3. Microsoft unveils new Phi-3.5 AI models
  4. ChatGPT doubles weekly users to 200 million

1. Europe’s AI Act takes effect, setting a global precedent

The European Union’s AI Act—the first comprehensive law governing artificial intelligence—officially came into force on August 1. Designed to ensure that AI systems used in the EU are both safe and trustworthy, the Act establishes a risk-based framework to categorize AI applications and impose corresponding obligations.

The regulation defines four risk categories: minimal risk, specific transparency risk, high risk, and unacceptable risk. Minimal-risk systems, such as spam filters, face no obligations, while applications posing transparency risks, such as chatbots and deepfakes, must clearly disclose their AI nature to users.

High-risk AI systems—those involved in areas like recruitment and credit assessment—must meet strict safety and transparency requirements, including human oversight and data quality controls. AI systems posing an “unacceptable risk,” such as those used for social scoring or manipulative behaviors, are banned entirely.

Member States have until August 2025 to designate the regulatory bodies that will enforce the Act. To prepare for full applicability in 2026, the EU has introduced the AI Pact, which encourages companies to comply voluntarily ahead of the deadlines. Non-compliance with the most severe violations could result in fines of up to 7% of global annual turnover.

2. OpenAI and Anthropic partner with U.S. AI Safety Institute for pre-release model testing

OpenAI and Anthropic, two leading AI startups, have agreed to let the U.S. AI Safety Institute test their models before public release. The agreements come amid growing concerns about AI safety and ethics across the industry. The U.S. AI Safety Institute, which operates under the National Institute of Standards and Technology (NIST), will gain access to the companies’ new models both before and after release in order to assess and mitigate potential risks.

The collaboration follows the U.S. government’s first-ever executive order on AI, issued in 2023, which emphasized the need for safety assessments and research on AI’s societal impact.

OpenAI’s CEO, Sam Altman, expressed support for the agreement, while Anthropic’s co-founder, Jack Clark, highlighted the importance of strict testing to ensure responsible AI development. The agreement also allows joint research to evaluate AI capabilities and address safety concerns.

3. Microsoft unveils new Phi-3.5 AI models

In August 2024, Microsoft introduced three new models in its Phi-3.5 series, marking a notable advancement in multilingual and multimodal AI. These models—Phi-3.5 Mini Instruct, Phi-3.5 MoE (Mixture of Experts), and Phi-3.5 Vision Instruct—are designed for diverse tasks including reasoning, code generation, and image analysis.

The Phi-3.5 Mini Instruct model, optimized for memory-constrained environments, excels in logic-based reasoning. The Phi-3.5 MoE model uses an innovative architecture to deliver scalable AI performance, outperforming larger models in various benchmarks. Finally, Phi-3.5 Vision Instruct integrates text and image processing for tasks such as optical character recognition and video summarization.

All three models are available on Hugging Face under the permissive MIT license, allowing developers to freely use and modify them. Microsoft’s open-source approach aims to spur innovation across the AI community while maintaining state-of-the-art performance.

4. ChatGPT doubles weekly users to 200 million

OpenAI’s ChatGPT has reached 200 million weekly active users, doubling its numbers since last November, according to Axios. An OpenAI spokesperson confirmed that 92% of Fortune 500 companies now use OpenAI’s products, with API usage also increasing after the launch of the more affordable GPT-4o Mini.

ChatGPT continues to face competition from rival AI assistants built by Google, Microsoft, and Meta. Meta’s AI assistant now counts 185 million weekly users.

