

A Vision Check-up for Language Models

Pratyusha Sharma · Tamar Rott Shaham · Manel Baradad · Stephanie Fu · Adrian Rodriguez-Munoz · Shivam Duggal · Phillip Isola · Antonio Torralba

Arch 4A-E Poster #460
Highlight
Thu 20 Jun 10:30 a.m. PDT — noon PDT


What does learning to model relationships between strings teach Large Language Models (LLMs) about the visual world? We systematically evaluate LLMs’ abilities to generate and recognize an assortment of visual concepts of increasing complexity and then demonstrate how a preliminary visual representation learning system can be trained using models of text. As language models lack the ability to consume or output visual information as pixels, we use code to represent images in our study. Although LLM-generated images do not look like natural images, results on image generation and the ability of models to correct these generated images indicate that precise modeling of strings can teach language models about numerous aspects of the visual world. Furthermore, experiments on self-supervised visual representation learning, utilizing images generated with text models, highlight the potential to train vision models capable of making semantic assessments of natural images using just LLMs.
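To make the "code as image" setup concrete, the sketch below shows one way an image can be represented as a short program. This is an illustrative assumption only: the specific prompts, programming languages, and rendering pipeline used by the authors are not reproduced here; the example simply renders a simple visual concept with matplotlib so it can be rasterized into pixels.

```python
# Illustrative sketch only: representing an image as code, in the spirit of
# the paper's setup (not the authors' actual prompts or pipeline).
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# A simple visual concept: "a red circle above a blue square on a white background".
fig, ax = plt.subplots(figsize=(2.56, 2.56), dpi=100)  # ~256x256 pixel canvas
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axis("off")

# Blue square near the bottom of the canvas.
ax.add_patch(patches.Rectangle((0.35, 0.1), 0.3, 0.3, facecolor="blue"))

# Red circle above the square.
ax.add_patch(patches.Circle((0.5, 0.65), 0.15, facecolor="red"))

# Rasterize the program into pixels; renders like this can serve as
# synthetic training images for a vision model.
fig.savefig("red_circle_above_blue_square.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```

In the study itself, the language model is asked to write programs of roughly this kind for visual concepts of increasing complexity, to recognize concepts from such code, and to iteratively correct its own generated programs.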
