About Artificial Intelligence

Why I Won’t Use Artificial Intelligence to Write My Blog

By now most of you have heard of artificial intelligence, or “AI” as it is called. You have probably heard that AI is going to “save the world,” solve all your problems, take the place of workers, make things less expensive, and run all the banks and businesses, etc. Let’s take a closer look.

ChatGPT is one of the free AI programs available for the general public to use. I decided to take a look at it. The “GPT” part of ChatGPT stands for “Generative Pre-trained Transformer.” (I used ChatGPT itself to find out what “GPT” stood for.) So then, how was ChatGPT programmed? ChatGPT will tell you itself:

“The information I (ChatGPT) provide is based on a mixture of licensed data, data created by human trainers, and publicly available data. I have been trained on a diverse range of data, including encyclopedic information, literature, websites and other texts, to generate responses based on that training.” ~ ChatGPT.

So, ChatGPT was programmed by gathering all sorts of information from across the internet. Since a lot of very good health information was censored during the past four years (and maybe longer), a large part of alternative health knowledge may not be part of AI’s knowledge.

What does ChatGPT have to do with my health blog, you ask? I want to explain that I will never let ChatGPT “write a blog post” for me. These posts will come directly from me. I think this is important because my website is from me to you, with no artificial intermediary. I want to be creative and instructive, and I don’t want a Silicon Valley influence to come between us!

That said, I may use AI in the research that I do to create my posts. It can be a great tool for me to better understand different aspects of human biology. It is very good at explaining anatomy or biochemical concepts that I might need more information about. (And it doesn’t get tired if I ask the same question over and over for better clarification.)

There are also AI tools that help people create an outline for their writing. There are AI tools that transcribe spoken words into text. And there are AI tools that help writers create better paragraphs. If I use any of these tools, you can be assured that my content is always truthful to the best of my knowledge.

In a way, artificial intelligence and programs like ChatGPT are just tools for us humans to use in our daily work. They are no different from earlier inventions such as the printing press or the typewriter. They are not inherently good or evil. They can certainly be used by people with good intentions as well as by people with bad intentions.

That said, extreme caution has to be exercised when asking questions of AI. When you ask ChatGPT a question, you may get a biased answer. AI was written by Silicon Valley tech people. And if you ask it health questions, you may get answers that come right out of the mouth of Big Pharma, which has a dollar-sign agenda. That is not what I want for my blog content. Since Big Pharma dominates much of our media, as well as journal articles, a lot of incorrect information is touted as correct.

For example, I asked ChatGPT how to treat a COVID infection, and it told me to stay home and only go to the doctor if symptoms got severe. OK, that is good. It also told me to get vaccinated. It said nothing about other treatments such as ivermectin or vitamin D. And then, seconds later, a little box saying “Network Error” appeared on my screen, and a new list of treatments came up. Vaccination was no longer on the list. Instead, it listed monoclonal antibodies. Very strange. I am not sure what to make of that.

To give another example of how AI can give incorrect information (and it appears to know it is giving you incorrect information, which is creepy), see below for what Gavin de Becker discovered. (The example below is not a health-related example, but it is still a good one.)

Gavin de Becker is a security expert who has written several books on security (The Gift of Fear is excellent). In an interesting experiment, he demonstrates ChatGPT’s ability to confuse its users. Gavin uses a document from Henry Kissinger’s files (which is publicly available) and references NSSM 200 in his dialogue with ChatGPT.

ChatGPT initially gives vague answers to Gavin’s questions, but Gavin continues to push the program to get down to specifics. ChatGPT gives several inaccurate answers before eventually giving the correct one. Gavin de Becker then admonishes the program about this (see below), and ChatGPT keeps apologizing. The entire “conversation” is HERE. It is 56 pages long, and the picture below is from page 17. (I circled Gavin’s takeaway from the conversation in red. ChatGPT’s answers are under the green bars.)

At the very end of the 56 pages (a part not reproduced here), Gavin has to ask ChatGPT three times for an answer to one of his questions. It finally gives him the answer. Granted, the above example is slightly “third rail” in that it touches on population control and the writings of Henry Kissinger, but it shows how misleading AI can be.

I suppose if you want to know how to wire a lamp or apply fiberglass to the hull of a boat, go ahead and use ChatGPT. But frame your questions carefully. As a casual user of such a powerful system, you could be misled very easily. I know I will use artificial intelligence in an “intelligent” and “human” manner. I hope you will do the same. User beware!

Always talk to your health professional before starting anything new. This information is not intended to diagnose, treat, or cure any condition and is intended only for entertainment. I welcome your comments.

4 Comments

  1. The little I have learned about AI so far leaves me baffled. It’s like the Wild West; no rules!
    Thank you, you have given me a lot to think about…

  2. Great article!
    I do doubt the “I” in AI, especially now knowing what GPT stands for. AI is regurgitating what it was programmed to do. It is a very sophisticated piece of software, but still software. Still programmed. It is not its own intelligence.

    Interesting tidbit I read the other day about Amazon: they had to close down a customer service operation in India when they shut down their AI shopping cart service, because they could never get the AI part to work. Instead, they had people watching cameras to run the “AI” cart.
