132 Comments

Freddie deBoer

These comments really demonstrate the permanent motte and bailey nature of this discussion

Geoff Olynyk

As someone in a can’t-afford-to-fuck-around field (nuclear energy, though I’m in business strategy/business development, not operations), we’ve been able to integrate AI pretty flawlessly into our workflow. What you have to drill into the junior business analysts is that it’s an idea-generator and a source for leads only, and that all actual facts and interpretation that show up on a page need to be the product of their own mind.

Make it clear that directly putting AI output on a page will lead to incorrect information being presented to executives and they will be held accountable when (not if) that happens. They’ll still do it, because people are lazy, but then it’s subject to performance management like any other employee deficiency.

If you treat it this way — letting the AI sketch the basic contours of the story as a thought starter and link you to primary sources — it really can save a ton of time. I had ChatGPT Plus write me a basic primer on the electric grid of Ireland and the suitability of that grid for small modular reactors, and it honestly was fantastic. I’m experienced enough to know what it got wrong, and it was probably 95% correct. (Certainly nothing close to the garbage that this grad student seems to have received — I wonder if they prompted it with something about telling a story?) In 18 minutes of “deep thinking” mode it probably did what would take a good business analyst two weeks.

Careless people will take LLM output at face value and their organizations will suffer the consequences. But in my experience most managers are pretty smart and are using these tools effectively.

130 more comments...
