Artificial Intelligence – FAQs

In recent months, Artificial Intelligence (AI) has been in the news a lot. Some of you may already have experience using the various AI tools currently available, while others may have questions about AI or simply want to learn more.

Earlier this week, The Wall Street Journal published a standalone section titled “Artificial Intelligence.” The lead article was “What Readers Want to Know About Artificial Intelligence.” We thought it was a good primer on AI and wanted to share it with you in case you haven’t seen it.

As publishers, we care about AI because our basic product is intellectual property, and AI developers have been “training” their software by scouring the web for everything that appears there, whether words or images, and appropriating it for free, regardless of whether the material is protected by copyright or trademark. The New York Times recently sued OpenAI and Microsoft for copyright infringement, claiming that millions of articles published by The Times were used to train AI-powered chatbots that now compete with the news outlet as a source of reliable information. This is only one of many lawsuits filed by copyright owners against AI developers in recent months, including suits by the Authors Guild, several bestselling authors, and Getty Images. At the beginning of this year, TechTarget published a roundup of then-pending litigation against AI developers.

Here are the questions that this week’s WSJ article answers:

1. I really do not understand artificial intelligence and where it’s heading. Please explain what it is.
2. What about those “chatbot” AI systems?
3. How can these new AIs answer almost any type of question?
4. What are the implications of this for average citizens?
5. What about “hallucinations,” when an AI makes something up but presents it as a fact?
6. What are AI providers doing to minimize hallucinations?
7. Do chatbots warn people about the possibility of giving bad information?
8. What about people using AI to create phony news reports, photos or videos? Can it be stopped?
9. Wouldn’t it be better if we could see the sources AIs use in their responses, like footnotes or citations in a report?
10. What other measures should be considered to help us discern the quality of the information AI produces? Would some type of rating system work?
11. Executives from Google, Microsoft, OpenAI and other experts warned publicly in 2023 about the dangers of AI. What do they fear?
12. In March 2023, many in the AI industry signed a statement calling for a pause in moving ahead with ever more powerful versions. Did that pause happen?
13. Some say governments need to regulate AI to make it as safe as possible. What might that look like?
14. I’ve heard a lot of doomsday speculation about AI, but what are some of the ways it is doing good?

To see the WSJ’s answers to these questions, read the entire article online, or in PDF….
