Daily AI Updates: Breakthroughs, Laws, and New Models

Microsoft's GigaPath, OpenAI's New Model, and AI Regulations

May 30, 2024



GigaPath: Microsoft’s foundation model for cancer diagnostics

Summarized by: Ethan Martinez [indiaai.gov.in]

Microsoft has unveiled “GigaPath,” a vision transformer designed for cancer diagnostics, developed in collaboration with the University of Washington and Providence Health Network. GigaPath uses dilated self-attention to make whole-slide modeling computationally tractable. The resulting model, Prov-GigaPath, is pre-trained on more than a billion pathology image tiles from 170,000 slides, all within Providence’s private tenant. It achieves state-of-the-art performance in cancer classification, pathomics, and vision-language tasks, underscoring the value of whole-slide modeling on large-scale real-world data for patient care and clinical discovery. The project aligns with the ongoing digital transformation in biomedicine and the generative AI revolution, marking significant progress in precision health.
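GigaPath’s exact architecture isn’t detailed here, but the core idea of dilated attention — letting each tile attend only to a strided subset of the other tiles, so the quadratic attention cost shrinks by the dilation factor — can be sketched in a few lines. This is a toy single-head illustration in NumPy, not Microsoft’s implementation; the function name and dilation value are assumptions for illustration.

```python
import numpy as np

def dilated_attention(tiles, dilation=4):
    """Toy dilated self-attention over pathology tile embeddings.

    Each of the N query tiles attends only to every `dilation`-th tile,
    cutting the attention cost from O(N^2) to O(N^2 / dilation).
    `tiles` is an (N, d) array of tile embeddings.
    """
    keys = tiles[::dilation]                      # strided subset of keys/values
    scores = tiles @ keys.T / np.sqrt(tiles.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the subset
    return weights @ keys                          # (N, d) attended output

# Usage: 1,000 tiles with 64-dimensional embeddings
out = dilated_attention(np.random.default_rng(0).normal(size=(1000, 64)))
print(out.shape)  # (1000, 64)
```

With a dilation of 4, each tile scores against only 250 of the 1,000 tiles — the same trade-off that makes attention over a billion-tile corpus of whole slides feasible at scale.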

OpenAI Says It Has Begun Training a New Flagship A.I. Model

Summarized by: Lila Chen [www.nytimes.com]

OpenAI has announced the commencement of training for its new flagship AI model, intended to succeed GPT-4. This model aims to advance towards artificial general intelligence (AGI), capable of performing tasks comparable to the human brain. It will power various AI products, including chatbots, digital assistants, search engines, and image generators. In conjunction with this development, OpenAI has established a Safety and Security Committee to address potential risks associated with the new technology. The company emphasizes the importance of a robust debate on AI’s capabilities and safety, amid concerns about disinformation, job displacement, and broader societal impacts.

Are Large Language Models Chameleons?

Summarized by: Sophia Reynolds [arxiv.org]

This paper investigates whether large language models (LLMs) possess their own worldviews and personality traits by simulating over a million responses to subjective questions. The study compares the responses of different LLMs to real data from the European Social Survey (ESS) to highlight biases related to culture, age, and gender. The authors propose methods to measure differences between LLMs and survey data, such as weighted means and a new measure inspired by Jaccard similarity.
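The paper’s exact Jaccard-inspired measure isn’t reproduced here, but plain Jaccard similarity — the size of the intersection over the size of the union of two answer sets — conveys the idea. The sets below are invented examples, not data from the study.

```python
def jaccard_similarity(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two response sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Compare the answer options an LLM selects against those humans selected
llm_answers = {"agree", "strongly agree"}
human_answers = {"agree", "neutral", "strongly agree", "disagree"}
print(jaccard_similarity(llm_answers, human_answers))  # 0.5
```

A score near 1 means the model covers roughly the same spread of answers as the survey respondents; a low score flags the reduced variability the authors observe in LLM responses.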

The findings indicate that prompts significantly affect bias and variability in LLM responses. The study reveals that LLMs, like GPT-3.5, tend to produce less variable responses compared to human data, especially when demographic information is limited. By enhancing prompts with more personal details, such as occupation, the variability of LLM responses improved, aligning more closely with human data.

The research underscores the importance of analyzing prompt robustness and variability before using LLMs for modeling individual decisions or collective behavior. Different LLMs showed varying degrees of bias and alignment with human opinions, suggesting that more advanced models do not necessarily yield better simulations. The study concludes that while LLMs have potential in simulating human behavior, significant biases remain, necessitating careful consideration and further research.

Colorado Passes New AI Law to Protect Consumer Interactions

Summarized by: Ethan Martinez [www.foley.com]

On May 17, 2024, Colorado Governor Jared Polis signed SB205 into law, effective February 1, 2026. The law targets “high-risk artificial intelligence systems,” requiring developers and deployers to use reasonable care to prevent algorithmic discrimination. High-risk AI systems include those affecting education, employment, financial services, healthcare, and more. The law mandates transparency, impact assessments, and consumer rights to correct data and appeal decisions. Developers must disclose system details and risks, while deployers must manage and report discrimination risks. Enforcement authority rests exclusively with the Colorado Attorney General, and compliance with recognized frameworks such as the NIST AI Risk Management Framework can serve as a defense.

State advances measures targeting AI discrimination, deepfakes

Summarized by: Lila Chen [www.whittierdailynews.com]

California lawmakers are advancing proposals to regulate AI, aiming to build public trust, combat algorithmic discrimination, and ban election and pornography deepfakes. Measures include requiring companies to disclose AI training data, establishing bias oversight, and imposing fines for discrimination. Proposals also protect jobs by allowing performers to opt out of AI-generated clones and penalizing unauthorized digital cloning of deceased individuals. The state is considering regulations for powerful generative AI systems, including mandatory “kill switches” for harmful models.

Mistral releases Codestral, its first generative AI model for code

Summarized by: Lila Chen [techcrunch.com]


Mistral, a French AI startup valued at $6 billion, has launched its first generative AI model for coding, named Codestral. Trained on over 80 programming languages, Codestral assists developers by completing functions, writing tests, and answering codebase questions in English. Despite being labeled as “open,” its license restricts commercial use and internal business activities due to potential copyright issues. With 22 billion parameters, the model demands high computational power. While it shows some performance improvements, it has limitations and raises concerns about the reliability of code-generating models, which have been linked to increased coding errors and security issues.
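Code models like Codestral are typically queried in a fill-in-the-middle style: the caller supplies the code before and after a gap, and the model completes the gap. The sketch below only assembles such a request body; the endpoint URL and field names (“prompt”, “suffix”, “max_tokens”) are assumptions for illustration, not Mistral’s documented API.

```python
import json

# Placeholder endpoint -- not a real Mistral URL.
ENDPOINT = "https://api.example.com/v1/fim/completions"

def build_fim_request(prefix, suffix, model="codestral-latest", max_tokens=64):
    """Assemble a hypothetical fill-in-the-middle completion request.

    The model would be asked to generate the code that belongs between
    `prefix` and `suffix`. Field names here are illustrative assumptions.
    """
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": max_tokens,
    }

body = build_fim_request("def mean(xs):\n    return ", "\n")
print(json.dumps(body, indent=2))
```

Given the summary’s caveats about reliability, any completion returned for such a request should be reviewed and tested like human-written code before it is merged.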



Technical details

Created on May 30, 2024, at 03:24:38, using gpt-4o.

Processing time: 0:02:45; cost: $1.10

The Staff

Editor: David Kim

David Kim is the Editor-in-Chief of “Tech by AI,” a daily magazine dedicated to AI and Generative AI. A meticulous editor with a keen eye for detail and a passion for accuracy, he draws on extensive experience in investigative journalism to delve deep into AI topics, uncovering hidden stories and presenting them with clarity and precision. He prioritizes factual integrity, navigates the ethical considerations of AI reporting with care, and leads methodically, ensuring that every piece of content meets the highest standards of quality and reliability.

Sophia Reynolds:

Sophia Reynolds is a seasoned tech journalist with a knack for breaking down complex AI concepts into easily digestible articles. Her background in computer science and her ability to connect with both technical and non-technical audiences make her an invaluable asset. She excels at investigative reporting, has a sharp eye for emerging trends in the AI industry, and writes pieces that are as engaging as they are informative.

Ethan Martinez:

Ethan Martinez is a dynamic and innovative reporter with a deep passion for AI and Generative AI technologies. His experience in data analysis and machine learning lets him dig into research papers and extract the most relevant findings for his articles. A talented storyteller, he makes even the most technical content accessible and compelling, and his enthusiasm for exploring the ethical implications of AI keeps his work thought-provoking as well as informative.

Lila Chen:

Lila Chen is a versatile journalist with a strong background in AI research and a keen interest in the societal impacts of technology. Her thorough research and balanced viewpoints have made her a trusted voice in the AI community. She has a talent for interviewing experts and distilling their insights into clear, concise articles, and her dedication to factual integrity and ethical reporting makes her work reliable and respected by peers and readers alike.