64 Comments

Matt
I am willing to bet your query is running afoul of "don't generate images someone might find offensive" hacking.

Okulpe

Putting aside the PC possibility, the likely issue is that LLMs (on which the art programs are based) derive their data from associations between words in a sequence in the texts they scrape. They do not use logic, and they have no grasp of cause and effect. A clue here is that, as you note, the story is in Luke, but the idea lives in pictures (which LLMs don't scrape) and in mythology, which their makers may characterize as low-authority sources. The Bible, meanwhile, is, well, THE BIBLE on religion, so it is well scraped, and there will be a powerful Jesus-wash association with no indication of who washed whom. The model's data indicate that Jesus is important, so the washing-Jesus association collapses into just that: "x washes Jesus." Please share your post with fellow Substacker Gary Marcus, who can explain more fully. He has discussed such failures before, but the one you have found is especially interesting due to its consistency across prompts and programs. It's a great example of AI failure.
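The directionality point can be illustrated with a toy sketch. This is not how any production image model actually works; it is just a minimal co-occurrence counter (a hypothetical `cooccurrence` helper) showing that if you record only which words appear near each other, "Jesus washes Peter" and "Peter washes Jesus" become indistinguishable:

```python
from collections import Counter

def cooccurrence(sentence, window=2):
    """Count unordered word pairs within a window.

    Because each pair is stored as a frozenset, word order -- and
    therefore who-did-what-to-whom -- is discarded.
    """
    tokens = sentence.lower().split()
    pairs = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pairs[frozenset((w, tokens[j]))] += 1
    return pairs

# Both sentences yield exactly the same association counts:
a = cooccurrence("jesus washes peter")
b = cooccurrence("peter washes jesus")
print(a == b)  # True
```

Real models use ordered context and do far better than this, but the sketch captures the commenter's point: a strong "Jesus-wash" association by itself says nothing about who washed whom.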
