We're having a similar issue at my firm. We have a new client that wants to automate AI translations with ChatGPT. The problem is that ChatGPT sucks at translation. DeepL is much better - and it would cost the company much less if they just got a monthly subscription. It still needs refinement, obviously, but it sucks much less than ChatGPT. But they really want to use ChatGPT, and I guess it's because it's the hot new shiny AI tool on the block, whereas DeepL has been around for a while and is less compelling. They are seriously willing to waste thousands of dollars on a tool that will deliver lower-quality texts, just to say they "implemented AI". It's absurd.
Generally speaking, I run into the same issue as your sister-in-law: it's difficult to use GPT because most of the copy is bad and can't be trusted anyway due to its propensity to hallucinate, so it's kind of useless for research. If we have to double-check everything it says, we might as well do the research ourselves - it would be like working with a colleague who just makes shit up occasionally. Why would you assign any tasks to someone who was known to do that? (In fact, such a person would probably be fired.) I've been trying to find ways to work it into our workflows, but there's not much beyond letting it write the boring copy, like product descriptions. (And even then, it often doesn't integrate the keywords like we ask.)