Pixit Pulse: The Weekly Generative AI Wave

AI News - Week #45

Written by Pix | Nov 6, 2023 3:57:12 PM

Foundation Models Get a Reality Check

Story: Researchers have assessed the transparency of various foundation models, including GPT-4, Llama 2, and Stable Diffusion 2. The Center for Research on Foundation Models (CRFM) at Stanford University used 100 indicators, each scored as a binary true/false value, to evaluate transparency across different categories. Indicators include, among others, data size, compute usage, direct data access, and model components.
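
For intuition, here is a minimal sketch of how such a binary-indicator index rolls up into a percentage score. It is only an illustration, assuming a handful of made-up indicator names rather than the actual 100 CRFM indicators.

    # Toy transparency scoring: each indicator is a binary true/false value,
    # and the score is the share of satisfied indicators as a percentage.
    # The indicator names are illustrative placeholders.
    indicators = {
        "data_size_disclosed": True,
        "compute_usage_disclosed": False,
        "direct_data_access": False,
        "model_components_documented": True,
    }

    score = 100 * sum(indicators.values()) / len(indicators)
    print(f"Transparency score: {score:.0f}%")  # 50% for this toy set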

Key Findings: On average, current foundation models score 37% on the transparency index. Stable Diffusion 2, developed by StabilityAI and the model we leverage at Pixit, ranks 4th with a score of 47%. The model exhibits strong transparency in areas including (1) model details (e.g., methods, model basics, and model access) and (2) distribution (e.g., model license, terms of service). However, transparency is lacking in aspects related to (1) mitigation (e.g., external reproducibility or mitigation demonstration), (2) data labor (e.g., use or employment of human labor), and (3) impact (e.g., usage reports or the number of affected individuals).

Pixit's Two Cents: We'll definitely increase the impact Stable Diffusion will have on individuals in the future - especially on professional headshots.

Real or Rendered? Deepfakes Spark Debate at AI Summit in UK

Story: Deepfakes were among the hottest topics at the AI summit in the UK. In Germany, politicians fear that deepfakes will be used to spread misinformation and negatively influence opinion formation, especially during elections.

Key Findings: Over the past few years, the prevalence of deepfakes has surged dramatically. Advancements in artificial intelligence have streamlined the creation of highly convincing deepfake content, making it increasingly difficult to distinguish between what is real and what is artificially generated. Creating deepfakes is now easier than ever, raising concerns across various sectors about issues ranging from misinformation and privacy invasions to security threats and political manipulation. Among the most prominent examples are deepfakes of Barack Obama and the Pope.

Pixit's Two Cents: We're definitely thinking about adding watermarks to our images to ensure authenticity and mitigate the misuse of deepfake technology.
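
For illustration, here is a minimal sketch of a simple visible watermark using Pillow. The file names and label text are placeholder assumptions, not our production pipeline; an invisible or cryptographic watermark would be harder to remove, but the goal is the same: mark generated images as generated.

    # Minimal sketch: stamp a semi-transparent text label onto an image with
    # Pillow. File names and the label are placeholder assumptions.
    from PIL import Image, ImageDraw

    def add_watermark(in_path, out_path, label="AI-generated"):
        image = Image.open(in_path).convert("RGBA")
        overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        # Measure the label and place it in the bottom-right corner.
        left, top, right, bottom = draw.textbbox((0, 0), label)
        position = (image.width - (right - left) - 10,
                    image.height - (bottom - top) - 10)
        draw.text(position, label, fill=(255, 255, 255, 128))  # semi-transparent white
        Image.alpha_composite(image, overlay).convert("RGB").save(out_path)

    add_watermark("headshot.png", "headshot_watermarked.png")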

AI Sees Dogs as Cats: New Tool to Guard Artists’ Work

Story: Researchers have recently developed a data-poisoning tool that corrupts image descriptions by replacing keywords, for example, swapping hats with cakes or handbags with toasters. Training a model on the corrupted data causes it to malfunction. Artists are employing this method as a strategic move to compel large corporations to seek permission before using their work.
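
To make the mechanism concrete, here is a toy sketch of the keyword-replacement idea described above, assuming a small hand-made swap table. It is not the researchers' actual tool, only an illustration of how a caption can be made to contradict its image.

    # Toy caption poisoning: replace selected whole-word keywords in an image
    # description so the text no longer matches the picture. The swap table
    # is made up for illustration.
    import re

    SWAPS = {"hat": "cake", "handbag": "toaster", "dog": "cat"}

    def poison_caption(caption):
        pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
        return pattern.sub(lambda m: SWAPS[m.group(0).lower()], caption)

    print(poison_caption("A dog wearing a hat next to a handbag"))
    # -> "A cat wearing a cake next to a toaster"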

Key Findings: After feeding just 300 poisoned images to Stable Diffusion, its output started to degrade, producing, for instance, cats instead of dogs or anime styles instead of cubism styles. Moreover, the models do not only mix up specific keywords, such as dog, but also related keywords, such as husky or puppy.

Pixit's Two Cents: This marks a significant step towards rebalancing the scales of power in the domain of art and artificial intelligence.

Small Bites, Big Stories: