In the previous post we lauded some of the wonders of which the new generation of chatbots is capable. In this post we will look at some of the drawbacks.
To be fair, it must be said that
the home screen of ChatGPT displays a whole list of caveats: not up to date by
several years, may occasionally produce biased content, may give information
that is plain wrong, etc. The internet is buzzing with examples. Rather than
make an invidious selection, I decided to conduct my own experiment. Here’s the first. There’s a long-established Irish-Spanish family in the city of Valencia, Spain, where I live, which played a
prominent role in its modernisation in the nineteenth and early twentieth centuries.
The family name is Trenor. I’d heard of them but wanted to know more. To my
consternation, the answer came back from ChatGPT that it had no information
about the family. The answer is all the harder to understand because there’s a
long article about the Trenors in Wikipedia! True, ChatGPT adds, “I cannot access up-to-date information on specific families or individuals without violating their privacy”; but I didn’t
specify “up to date.”
Undaunted, I decided to try a second experiment, this time
on a topic that should be more familiar to regular readers of this blog and so
make it easier for them to judge the result. The question was, “Who first proposed the natural translation hypothesis?” This time I did get a meaningful
response – only it was a wrong one: it named Eugene Nida as the originator.
Nida was an immensely influential translatologist; there was a time, when I was
beginning in translation studies, that no thesis was complete without a reference to him. And it’s true
that Nida recommended the use of natural-sounding target language. However, he
did not formulate a natural translation hypothesis, and he used “natural” in quite
a different sense. Why then did ChatGPT make this mistake? We can only
speculate, because chatbots are black boxes whose workings are not revealed by
their designers, if they even understand them themselves. It could not be for
lack of data, since a cursory search with Google finds lots of references to
the hypothesis. So a more likely explanation is failure to apply the longest match
principle. The principle states that when seeking to match strings of
characters or words, only the longest match is acceptable. Could it be that
ChatGPT mistakenly stopped seeking after “translation” and missed “hypothesis”?
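To see what that speculation would mean in practice, here is a toy sketch in Python. It is only an illustration, not a claim about how ChatGPT actually works (its internals, as noted, are a black box); the term list, the function names and the query are all hypothetical.

```python
# Hypothetical illustration of the longest-match principle applied to
# multi-word term lookup. Nothing here reflects ChatGPT's real internals.

KNOWN_TERMS = {
    "natural translation",
    "natural translation hypothesis",
}

def first_match(words):
    """Stop at the first known term found - the failure speculated above."""
    for end in range(1, len(words) + 1):
        candidate = " ".join(words[:end])
        if candidate in KNOWN_TERMS:
            return candidate
    return None

def longest_match(words):
    """Scan the whole phrase and keep only the longest known term."""
    best = None
    for end in range(1, len(words) + 1):
        candidate = " ".join(words[:end])
        if candidate in KNOWN_TERMS:
            best = candidate  # a longer match may still follow, so keep going
    return best

query = "natural translation hypothesis".split()
print(first_match(query))    # natural translation  (drops "hypothesis")
print(longest_match(query))  # natural translation hypothesis
```

Under the longest-match rule the whole phrase is retrieved; stopping at the first acceptable match loses “hypothesis”, which is exactly the kind of error the answer showed.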
The next thing that happened was my
mistake. I pressed a wrong keyboard key and as a result the previous question
was repeated. Again I soon got an answer – but it was not the same one!
A single example isn’t enough to
declare one chatbot superior to another. However, if there’s a lesson we can learn
from it, it’s not to rely on only one chatbot. If there is the slightest
doubt, try another.