Made with ❤️ by Pixit

Google’s DeepMind and Isomorphic Labs unveil AlphaFold 3

Story: DeepMind, a subsidiary of Google's parent company Alphabet, has introduced AlphaFold 3 (AF3) together with Isomorphic Labs, a greatly improved AI model that builds on the success of its predecessor, AlphaFold 2, in predicting protein structures. AF3 moves the whole research community forward in AI-driven protein structure prediction, boasting a 50% improvement in accuracy over AlphaFold 2. The model's innovative isomorphic architecture allows it to handle proteins of varying sizes and complexities, making it a versatile tool for understanding the vast protein universe. Beyond its impressive performance in structure prediction, AF3 also demonstrates remarkable capabilities in protein design, opening up new possibilities for creating novel proteins with specific functions and properties.

Key Findings:

  • 50% Improvement in Accuracy: AF3 achieves a remarkable 50% improvement in protein structure prediction accuracy compared to its predecessor, AlphaFold 2, setting a new standard in the field.

  • Isomorphic Architecture: The model's isomorphic architecture enables it to handle proteins of varying sizes and complexities, making it a versatile tool for understanding the diverse protein universe.

  • Protein Design Capabilities: AF3 demonstrates impressive capabilities in protein design, allowing researchers to create novel proteins with specific functions and properties, paving the way for groundbreaking applications in medicine, biotechnology, and beyond.

  • Accelerating Scientific Discovery: By providing accurate and reliable protein structure predictions, AF3 has the potential to accelerate scientific discovery across various domains, from drug development and disease research to materials science and environmental sustainability.

  • Open-Source Availability: DeepMind plans to make AF3 available through open-source channels, ensuring that the scientific community can benefit from its capabilities and build upon its success.

  • Collaboration with European Molecular Biology Laboratory: DeepMind has partnered with the European Molecular Biology Laboratory (EMBL) to integrate AF3 into the EMBL-EBI database, making its predictions accessible to researchers worldwide.

Pixit's Two Cents: AF3 is another impressive achievement in AI-driven protein structure prediction and design. Its improved accuracy, versatility, and protein design capabilities have the potential to revolutionize our understanding of proteins and accelerate scientific discovery, as many experts agree. DeepMind's commitment to open-source availability and its collaboration with EMBL further show its dedication to advancing scientific knowledge and empowering researchers worldwide. Also, since this is the third iteration of their “product”, we get a glimpse of how important and needed such a system is. What we found interesting as well is that it uses the same transformer+diffusion backbone that generates pixels, as Jim Fan mentions on LinkedIn.

OpenAI Introduces New Tool to Detect Generated Images


Story: OpenAI is increasing efforts to ensure the authenticity of digital content amidst the rising use of generative AI technologies in creating diverse media such as images, videos, and audio. As these technologies become more embedded in daily digital interactions, determining the origin of content is becoming crucial for maintaining trust online. OpenAI has embraced the challenge by joining the Coalition for Content Provenance and Authenticity (C2PA). In addition, they introduced a new tool to identify content created by DALL·E 3.

Key Findings:

  • Contribution to Authenticity Standards: OpenAI is joining the Steering Committee of C2PA - a standard for digital content certification. Earlier this year, OpenAI added C2PA metadata to all images created and edited by DALL·E 3. C2PA will be integrated into Sora as well.

  • Societal Resilience Fund: OpenAI is joining Microsoft in launching a $2 million fund that supports AI education and understanding.

  • Image Detection Classifier: As of May 7th, OpenAI is opening applications for its image detection tool, which predicts the likelihood that an image was generated by OpenAI’s DALL·E 3. Internal testing shows that the classifier is highly accurate at distinguishing non-AI-generated images from those created by DALL·E 3 (AUC = 0.967).

  • Tamper-Resistant Watermarking: OpenAI is marking generated audio with an invisible signal that aims to be hard to remove.
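
The reported AUC of 0.967 can be made concrete with a small sketch. The scores below are made up for illustration (they are not OpenAI's data); what we compute is the pairwise definition of AUC, i.e. the probability that a randomly chosen DALL·E 3 image receives a higher detector score than a randomly chosen real image:

```python
# Toy illustration of what an AUC like 0.967 means for a binary detector.
# All scores below are hypothetical classifier outputs, not real data.

def auc(pos_scores, neg_scores):
    """AUC via pairwise comparison: fraction of (positive, negative) pairs
    where the positive outscores the negative; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores: DALL·E 3 images (positives) vs. real photos (negatives)
dalle_scores = [0.98, 0.91, 0.87, 0.95, 0.60]
real_scores = [0.05, 0.12, 0.30, 0.65, 0.08]

print(auc(dalle_scores, real_scores))  # one positive/negative pair is misordered
```

An AUC of 1.0 would mean the detector ranks every generated image above every real one; 0.5 is random guessing.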

Pixit's Two Cents: C2PA, image detection tools, and audio watermarking are key to maintaining credibility and ensuring that content stands up to scrutiny in the ever-evolving digital landscape. We’re happy to integrate such tools in our applications as well.
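
To give an intuition for how an "invisible, hard-to-remove" audio mark can work, here is a toy spread-spectrum sketch. This is our own illustration of the general idea, not OpenAI's actual scheme: a tiny keyed pseudorandom signal is added to the audio and later detected by correlating with the same keyed signal.

```python
import random

# Toy spread-spectrum watermark (our sketch, NOT OpenAI's method):
# embed a low-amplitude keyed +/-1 signal, detect it via correlation.

KEY, N, EPS = 42, 4096, 0.01  # shared secret, sample count, watermark amplitude

def watermark_signal(key, n):
    """Deterministic +/-1 sequence derived from the secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(audio, key, eps=EPS):
    """Add the keyed signal at inaudibly small amplitude."""
    w = watermark_signal(key, len(audio))
    return [a + eps * wi for a, wi in zip(audio, w)]

def detect(audio, key):
    """Normalized correlation with the keyed signal; near EPS if marked,
    near zero for unrelated audio."""
    w = watermark_signal(key, len(audio))
    return sum(a * wi for a, wi in zip(audio, w)) / len(audio)

rng = random.Random(0)
clean = [rng.uniform(-1, 1) for _ in range(N)]  # stand-in for real audio
marked = embed(clean, KEY)

print(detect(marked, KEY) > detect(clean, KEY))  # marked audio correlates higher
```

Without the key, the watermark looks like faint noise, which is what makes naive removal attempts hard; real systems add robustness against compression and editing on top of this basic idea.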


OpenAI: Introducing the Model Spec


Story: OpenAI has shared a first draft of the Model Spec, a document that specifies the broad objectives, detailed rules, and default behaviour of OpenAI’s models to ensure that AI interactions are safe, legal, and aligned with human values. The Model Spec combines past documentation, expert input, and current research to describe the desired model behaviour and how OpenAI evaluates tradeoffs when conflicts arise.

Key Findings:

  • Objectives: Broad, general principles that provide a directional sense of the desired behaviour (e.g., (1) assist the developer, (2) benefit humanity, and (3) reflect well on OpenAI)

  • Rules: Instructions that address complexity and help ensure safety and legality (e.g., (1) Follow the chain of command, (2) comply with applicable laws, and (3) don’t provide information hazards)

  • Default Behaviour: Guidelines that are consistent with objectives and rules, providing a template for handling conflicts and demonstrating how to prioritize and balance objectives (e.g., (1) assume best intentions from the user, (2) ask clarifying questions when necessary, and (3) be as helpful as possible without overstepping)

  • What’s Next: You can help by sharing your thoughts about how models should behave, how desired model behaviour is determined, and how best to engage the general public in these discussions here.

  • Examples: You can find lots of examples of the Model Spec on the same site.
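
The "follow the chain of command" rule can be sketched as a simple priority ordering over message roles. This is a minimal sketch of our own reading, not OpenAI's implementation; the role names mirror the Spec's hierarchy, while the conflict-resolution logic and topic keys are hypothetical:

```python
# Hedged sketch (our interpretation, not OpenAI's code): resolve conflicting
# instructions by role priority, as in the Model Spec's chain of command.

PRIORITY = {"platform": 0, "developer": 1, "user": 2}  # lower = higher priority

def resolve(instructions):
    """Given (role, topic, rule) triples, keep the highest-priority rule
    for each topic; 'topic' is a hypothetical key marking what conflicts."""
    chosen = {}
    for role, topic, rule in instructions:
        if topic not in chosen or PRIORITY[role] < PRIORITY[chosen[topic][0]]:
            chosen[topic] = (role, rule)
    return {topic: rule for topic, (role, rule) in chosen.items()}

msgs = [
    ("developer", "tone", "Answer formally."),
    ("user", "tone", "Use slang."),        # conflicts with the developer rule
    ("user", "length", "Keep it short."),  # no conflict, so it is kept
]
print(resolve(msgs))  # developer wins on "tone"; user's "length" rule survives
```

In the real system the model presumably weighs such conflicts during training rather than via an explicit lookup, but the ordering platform > developer > user captures the stated rule.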

Pixit's Two Cents: We’ve always wondered what rules and behaviours are incorporated in ChatGPT - a tool we use a lot at Pixit. From a technical perspective, it would be interesting to know exactly how such objectives can be integrated and how the model can be made (forced) to comply with them.


Small Bites, Big Stories:

Tags:
Pix
Post by Pix
May 13, 2024 9:18:17 AM