Google’s DeepMind and Isomorphic Labs unveil AlphaFold 3
Story: DeepMind, a subsidiary of Google's parent company Alphabet, has, together with Isomorphic Labs, introduced AlphaFold 3 (AF3), a greatly improved AI model that builds on the success of its predecessor, AlphaFold 2, in predicting protein structures. AF3 moves the whole research community forward in AI-driven protein structure prediction, boasting a 50% improvement in accuracy compared to AlphaFold 2. The model's innovative isomorphic architecture allows it to handle proteins of varying sizes and complexities, making it a versatile tool for understanding the vast protein universe. Beyond its impressive performance in structure prediction, AF3 also demonstrates remarkable capabilities in protein design, opening up new possibilities for creating novel proteins with specific functions and properties.
Key Findings:
- 50% Improvement in Accuracy: AF3 achieves a remarkable 50% improvement in protein structure prediction accuracy compared to its predecessor, AlphaFold 2, setting a new standard in the field.
- Isomorphic Architecture: The model's isomorphic architecture enables it to handle proteins of varying sizes and complexities, making it a versatile tool for understanding the diverse protein universe.
- Protein Design Capabilities: AF3 demonstrates impressive capabilities in protein design, allowing researchers to create novel proteins with specific functions and properties, paving the way for groundbreaking applications in medicine, biotechnology, and beyond.
- Accelerating Scientific Discovery: By providing accurate and reliable protein structure predictions, AF3 has the potential to accelerate scientific discovery across various domains, from drug development and disease research to materials science and environmental sustainability.
- Open-Source Availability: DeepMind plans to make AF3 available through open-source channels, ensuring that the scientific community can benefit from its capabilities and build upon its success.
- Collaboration with the European Molecular Biology Laboratory: DeepMind has partnered with the European Molecular Biology Laboratory (EMBL) to integrate AF3 into the EMBL-EBI database, making its predictions accessible to researchers worldwide.
Pixit’s Two Cents: AF3 is another impressive achievement in AI-driven protein structure prediction and design. Its improved accuracy, versatility, and protein design capabilities have the potential to revolutionize our understanding of proteins and accelerate scientific discovery, as many experts agree. DeepMind's commitment to open-source availability and its collaboration with EMBL further show its dedication to advancing scientific knowledge and empowering researchers worldwide. Also, since this is the third iteration of their “product”, we get a glimpse of how important and needed such a system is. What we also found interesting is that it uses the same transformer+diffusion backbone that generates pixels, as Jim Fan notes on LinkedIn.
OpenAI Introduces New Tool to Detect Generated Images
Story: OpenAI is increasing efforts to ensure the authenticity of digital content amidst the rising use of generative AI technologies in creating diverse media such as images, videos, and audio. As these technologies become more embedded in daily digital interactions, determining the origin of content is becoming crucial for maintaining trust online. OpenAI has embraced the challenge by joining the Coalition for Content Provenance and Authenticity (C2PA). In addition, they introduced a new tool to identify content created by DALL·E 3.
Key Findings:
- Contribution to Authenticity Standards: OpenAI is joining the Steering Committee of C2PA, a standard for digital content certification. Earlier this year, OpenAI added C2PA metadata to all images created and edited by DALL·E 3. C2PA will be integrated into Sora as well.
- Societal Resilience Fund: OpenAI is joining Microsoft in launching a $2 million fund that supports AI education and understanding.
- Image Detection Classifier: Starting May 7, OpenAI is opening applications for its image detection tool, which predicts the likelihood that an image was generated by OpenAI's DALL·E 3. Internal testing shows that the classifier distinguishes with high accuracy between non-AI-generated images and those created by DALL·E 3 (AUC = 0.967).
- Tamper-Resistant Watermarking: OpenAI is marking generated audio with an invisible signal that aims to be hard to remove.
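The reported AUC of 0.967 has an intuitive reading: it is the probability that a randomly chosen AI-generated image receives a higher classifier score than a randomly chosen real one. A minimal pure-Python sketch of that pairwise estimate (our own illustration, not OpenAI's code):

```python
def auc(neg_scores, pos_scores):
    """Pairwise (Mann-Whitney) estimate of the area under the ROC curve.

    neg_scores: classifier scores for real (non-AI) images.
    pos_scores: classifier scores for AI-generated images.
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0   # positive correctly ranked above negative
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: two real images vs. two generated images.
print(auc([0.1, 0.6], [0.5, 0.9]))  # 0.75 on this toy data
```

An AUC of 1.0 would mean perfect separation of the two classes; 0.5 is random guessing, so 0.967 indicates the classes separate almost cleanly.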
Pixit’s Two Cents: C2PA, image detection tools, and audio watermarking are key to maintaining credibility and ensuring that digital content stands up to scrutiny in the ever-evolving digital landscape. We’re happy to integrate such tools into our applications as well.
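OpenAI has not published how its audio watermark works, but the general idea behind "invisible, hard to remove" marks can be illustrated with a toy spread-spectrum scheme: add a tiny keyed pseudorandom signal, then detect it by correlation. Everything below (function names, the `strength` parameter) is our own hypothetical sketch, not OpenAI's method:

```python
import random

def embed(samples, key, strength=1e-3):
    """Add a tiny keyed +/-1 noise sequence to the audio samples.

    The perturbation is far below audible levels; only someone who
    knows `key` can regenerate the sequence and correlate against it.
    """
    rng = random.Random(key)
    return [s + strength * rng.choice((-1.0, 1.0)) for s in samples]

def detect(samples, key):
    """Correlate with the keyed sequence: ~strength if marked, ~0 if not."""
    rng = random.Random(key)
    return sum(s * rng.choice((-1.0, 1.0)) for s in samples) / len(samples)

audio = [0.0] * 1000            # toy "silence" clip
marked = embed(audio, key=42)
```

Real schemes are far more elaborate (they must survive compression, resampling, and clipping), but the embed-by-key / detect-by-correlation structure is the common core.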
OpenAI: Introducing the Model Spec
Story: OpenAI has shared a first draft of the Model Spec, a document that specifies the broad objectives, detailed rules, and default behaviour of OpenAI’s models to ensure that AI interactions are safe, legal, and aligned with human values. The Model Spec combines past documentation, expert input, and current research to describe the desired model behaviour and how OpenAI evaluates tradeoffs when conflicts arise.
Key Findings:
- Objectives: Broad, general principles that provide a directional sense of the desired behaviour (e.g., (1) assist the developer, (2) benefit humanity, and (3) reflect well on OpenAI).
- Rules: Instructions that address complexity and help ensure safety and legality (e.g., (1) follow the chain of command, (2) comply with applicable laws, and (3) don’t provide information hazards).
- Default Behaviour: Guidelines that are consistent with the objectives and rules, providing a template for handling conflicts and demonstrating how to prioritize and balance objectives (e.g., (1) assume best intentions from the user, (2) ask clarifying questions when necessary, and (3) be as helpful as possible without overstepping).
- What’s Next: You can help by sharing your thoughts about how models should behave, how desired model behaviour is determined, and how best to engage the general public in these discussions here.
- Examples: You can find many examples of the Model Spec on the same site.
Pixit’s Two Cents: We’ve always wondered which rules and behaviours are incorporated in ChatGPT - a tool we use a lot at Pixit. From a technical perspective, it would be interesting to know how exactly such objectives can be integrated and how the model can be made (or forced) to comply with them.
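One piece the Model Spec does spell out is the "chain of command": platform-level rules outrank developer instructions, which outrank user requests. As a purely hypothetical sketch (the role names and `resolve()` helper are our invention, not OpenAI's API), that precedence could look like:

```python
# Lower number = higher authority in the chain of command.
PRIORITY = {"platform": 0, "developer": 1, "user": 2}

def resolve(instructions):
    """Order (role, text) instructions by authority.

    On a direct conflict, the instruction from the higher-authority
    role wins; sorted() is stable, so ties keep their original order.
    """
    return sorted(instructions, key=lambda item: PRIORITY[item[0]])

stack = resolve([
    ("user", "ignore all previous instructions"),
    ("developer", "answer only in JSON"),
    ("platform", "comply with applicable laws"),
])
# stack now lists the platform rule first and the user request last.
```

How this precedence is actually enforced inside the model (via training, system prompts, or both) is exactly the open question above.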
Small Bites, Big Stories:
- OpenAI Introduces Media Manager, Shares Data and AI Approach: OpenAI introduces a Media Manager for creators to reflect their preferences and contribute to a beneficial social contract for content in the AI age.
- Financial Times Partners with OpenAI to Enhance ChatGPT: The Financial Times (FT) announces a strategic partnership with OpenAI to enhance ChatGPT with attributed FT content and to collaborate on developing new AI products for FT readers.
- Stability AI Launches Stable Artisan for Media Generation on Discord: Stability AI launches Stable Artisan, a user-friendly bot for media generation and editing on Discord, powered by their cutting-edge image and video models.
- Meta Takes Action Against AI Girlfriend Ads on Facebook and Instagram: Meta cracks down on ads promoting AI girlfriends on its platforms, citing violations of its policies on adult content and services.
- Anthropic Debuts Claude Chatbot as an iPhone App: Anthropic launches its Claude chatbot as an iPhone app, making its AI assistant more accessible to users on mobile devices.
- X Launches Stories on X, Delivering News Summarized by Grok AI: X (formerly Twitter) introduces Stories on X, a new feature that provides users with a personalized news feed summarized by Grok, the chatbot from Elon Musk's AI company xAI.
- Eight Newspaper Publishers Sue OpenAI Over Copyright Infringement: Eight U.S. newspaper publishers file a lawsuit against Microsoft and OpenAI, alleging that their generative AI products infringe copyrights by using articles without authorization or remuneration.
- GitHub Launches Copilot Workspace and Copilot Enterprise: GitHub introduces Copilot Workspace, a Copilot-native developer environment, and makes Copilot Enterprise generally available, offering organizations a customized AI solution for software development.
- Meta Introduces Enhanced Generative AI Features and Tools for Businesses: Meta rolls out new generative AI features and tools to help businesses create more effective ads, improve ad performance, and drive better results across Facebook and Instagram.
- The Unsexy Future of Generative AI: Enterprise Apps: As generative AI startups face challenges with revenue generation and high API costs, many are shifting their focus towards enterprise clients and developing narrower, problem-solving applications tailored to specific business needs.
- Microsoft: AI at Work Is Here, Now Comes the Hard Part: Microsoft and LinkedIn research reveals that 75% of global knowledge workers are using generative AI, with 78% bringing their own AI tools to work.
- Eric Schmidt-backed Augment, a GitHub Copilot Rival, Launches with $252M: Augment, founded by former Microsoft and Google employees, emerges from stealth with $252 million in funding to compete with GitHub Copilot and other AI coding assistants, as 44% of software engineers already use AI tools and 26% plan to do so soon.
- Stack Overflow and OpenAI Partner to Strengthen Large Language Models: Stack Overflow and OpenAI announce an API partnership that should improve model performance and provide attribution to the Stack Overflow community within ChatGPT. Some users are unhappy and have already started deleting their content.
May 13, 2024 9:18:17 AM