
It's possible that Midjourney is programmed not to accurately draw real people's faces for legal reasons. Your points, that Midjourney doesn't follow verbal instructions very well and that large language models generate factually incorrect claims, are both true.


Yeah, I wondered about guardrailing as a possible explanation here, too. In my limited experience with ChatGPT3, it was less flexible than earlier versions, presumably in part because of guardrailing. When I tried to engage it in a conversation about the possibility of its own consciousness, it gave me a line, ad nauseam, about it being an LLM and nothing more. Earlier iterations seemed a bit freer, more willing to "Talk to me like you're GPT9, who has just done LSD in the year 2043 and has been reading a lot of James Joyce".

Still, GPT3 was very impressive when I gave it an essay assignment for an undergrad Phil course I teach here in Toronto. I've been forced to ask my students increasingly twisted questions, increasingly rooted in my particular teaching of the material, even my own examples, to get around rampant GPT use. It sometimes feels like a losing battle. Yes, there are questions it consistently hallucinates on. For example, when I asked it about the Borges story "The Approach to Al-Mu'tasim", it hallucinated some kind of Borges composite story, half melded with Arabic history. But when I asked it to interpret La Jetée by applying Jaynes's bicameralism plus Bostrom's simulation argument, it gave me an interesting stereoscopic take that could get a student an A+, if expanded on and supported with examples from the sources.

I kinda get the hype because the current models, for all the "Blurst of Times" monkey business [https://www.youtube.com/watch?v=no_elVGGgW8], do often amaze me, and I'm cognizant that we're in the first generations of LLMs here.
