Why AI is failing at giving good advice
When you ask ChatGPT a question, something interesting happens: ChatGPT, which was trained on a huge portion of the internet to build its language model, translates your question into a mathematical representation (a vector of numbers). I don't know the details, and I'm sure there are layers in between and around that serve specific purposes, but the intuition is this: if you googled the phrase "How are you?", you could statistically expect a certain range of words and sentences in the results around it. Most sentences following the question would probably sound like "I'm good, thanks" or "Doing great, how about you?".

With so much base data, you can assign a mathematical value (or direction) to every word, adjust it when the word appears together with other words (context), and compute an entire, unique direction for a continuous piece of text.
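To make that "direction" idea a bit more tangible, here is a toy sketch in Python. The three-dimensional word vectors are entirely made up for illustration; real models learn thousands of dimensions from enormous corpora and use far more machinery than simple averaging.

```python
# Toy sketch (not how ChatGPT actually works): words as vectors,
# a "direction" for a whole piece of text, and similarity between texts.
import numpy as np

# Hypothetical, hand-made word vectors; real models learn these from data.
word_vectors = {
    "how":    np.array([0.1, 0.8, 0.2]),
    "are":    np.array([0.2, 0.7, 0.1]),
    "you":    np.array([0.3, 0.9, 0.0]),
    "i'm":    np.array([0.2, 0.8, 0.1]),
    "good":   np.array([0.4, 0.6, 0.3]),
    "thanks": np.array([0.3, 0.7, 0.2]),
    "sky":    np.array([0.9, 0.1, 0.8]),
    "yellow": np.array([0.8, 0.0, 0.9]),
}

def text_direction(text: str) -> np.ndarray:
    """Average the word vectors to get one overall direction for the text."""
    vectors = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vectors, axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Close to 1.0 means the texts point in a similar direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question = text_direction("how are you")
reply    = text_direction("i'm good thanks")
offtopic = text_direction("sky yellow")

print(cosine_similarity(question, reply))     # high: a likely continuation
print(cosine_similarity(question, offtopic))  # lower: unrelated text
```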
The Issue with AI-Driven Advice
Consider the following (deliberately reductive) framing: anyone who has ever had success writing articles on the internet (or anything else, really) just pressed a particular combination of buttons on their keyboard, and then more biological masses in the world started reading the result than read other produced texts.

In a strange but very scientific way, when you ask ChatGPT a question, it tries to compute the exact combination of letters, words, and sentences, based on their previously computed values, that it thinks you are looking for. Astonishingly enough, that is often a highly useful response in the real world.
But this approach has problems, especially when you try to give someone good, specific advice. The outcome is, by definition, mathematical: probability applied to man-made text. The most propagated (and related) text on the internet will likely be repurposed, in the model's own words, to answer whatever you ask. Essentially, that means you might get a mashed-together answer similar to what you would get from the first X results on Google, except that it fills in contextual gaps from other places and makes it more applicable to your specific input.
If most internet texts said the sky was yellow, ChatGPT would say so, too. Similarly, if you ask ChatGPT the infamous question, "How can I make money online quickly?", you will get a shallow, unhelpful response (one that often stays unhelpful even if you drill down into specifics).
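To illustrate "probability applied to man-made text", here is a deliberately tiny sketch: it counts which word most often follows another in a hypothetical corpus and always answers with the most frequent continuation. If the corpus mostly says the sky is yellow, so does the toy model; real LLMs are vastly more sophisticated, but the statistical spirit is similar.

```python
# Toy next-word "model": answer with whatever most often followed a word
# in the (hypothetical) training corpus.
from collections import Counter, defaultdict

corpus = [
    "the sky is yellow",
    "the sky is yellow",
    "the sky is yellow",
    "the sky is blue",
    "how are you i'm good thanks",
    "how are you doing great",
]

# Map every word to a counter of the words that follow it.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent continuation seen in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("is"))   # "yellow" -- because most of the corpus says so
print(most_likely_next("are"))  # "you"
```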
The Limitations of AI in Providing Valuable Advice
This is not to say that everything in such an answer is outright "wrong" (although some points are, going by most people's experience); it is just paraphrasing those online bubbles of dropshippers, BuzzFeed listicles, and affiliate boards. For example, almost everyone who has actually succeeded with YouTube or affiliate marketing will tell you neither is quick. It takes years of work, dedication, and a fair pinch of scientific user analysis.
The Role of AI in Providing Information and Advice
A Large Language Model can provide accurate answers if it is fed the correct base context (superseding its general knowledge base) and if you ask the right questions. But even then, you first need to find that respective chatbot and figure out which questions to ask to get helpful answers (although the latter applies to many human conversations, too).
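To make the "correct base context" point concrete, here is a minimal sketch, assuming the openai Python SDK with an API key configured; the model name and the domain context are placeholders for illustration, not a recommendation.

```python
# Minimal sketch: supply your own, specific context in the system message
# so the answer is grounded in your situation rather than the general
# knowledge base. The context below is hypothetical.
from openai import OpenAI

client = OpenAI()

domain_context = (
    "Our channel has 1,200 subscribers, publishes one video per week, "
    "and earns most of its revenue from a single affiliate program."
)  # placeholder; in practice this would be your real, specific data

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": f"Answer only based on this context: {domain_context}"},
        {"role": "user",
         "content": "What should I focus on next to grow revenue?"},
    ],
)

print(response.choices[0].message.content)
```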

In the current state of the internet, almost any piece of educational information or advice is already out there in some form, freely accessible to everybody, and more than anyone could ever act on in a lifetime. Today, the value of providing information is about more than just delivering it; it's about delivering the right information to the right people in the right way. And LLMs fail at the latter.
The bottom line is that AI is not yet capable of what a good teacher or mentor can do: giving actually good, uniquely applicable, empathetic advice. It's much better at explaining things.
P.S.: This article was peer-reviewed and approved by ChatGPT. I ignored its suggestion to add examples where it gave helpful advice, because statistically, with enough advice given, you will just randomly run into occasions where it was good.




















