

Relevance of ChatGPT results for co-creation and innovation

We are at the dawn of a new era, with generative AI yielding applications like ChatGPT, Midjourney, and many more to follow. The potential impact is immense, but so is the danger. Public discourse, already reeling from the political impact of fake news, now faces a new challenge: do large language models pose an even greater threat by amplifying misinformation that undermines all our institutions? How do we know what is true?

As a pioneer in AI¹ and an early adopter of pre-trained transformer models², I will share a path forward that is already delivering practical applications that integrate human collective intelligence with artificial intelligence agents.

Text presented as knowledge requires vetting to be trusted. Large language models promise massive value by putting vast collections of knowledge at our fingertips. This article makes the case that to trust and apply that knowledge, we need an additional form of AI based on human curation and contextualization. Let’s ask ChatGPT.

Question to ChatGPT: Do large language models provide a way to validate the source of knowledge?

ChatGPT: Large language models by themselves do not provide a way to validate the source of knowledge. These models are trained on vast amounts of text data and can generate text that is similar to the text they were trained on, but they do not have the ability to verify the accuracy or source of the information they provide. It’s important for people to use critical thinking skills and verify information from multiple sources in order to determine its accuracy and credibility.

Question to ChatGPT: Is there an AI technology that helps humans apply critical thinking to large language model results?

ChatGPT: There is currently no AI technology that specifically helps humans apply critical thinking to large language model results.

While large language models create excitement about their generative potential, we must be extremely cautious about the value of what they generate. Data lineage is foundational to trust in information systems, yet these models generate text from fragments without reference to data lineage. Anecdotes are exciting, but they are not the same as data provenance.²

“Large language models drop the provenance of fragments used to generate outputs. By definition, it cannot produce scientific knowledge.”
John Seely Brown³

In the remainder of this article, we will explore how an AI facilitator integrates the knowledge models of humans and intelligent agents (in this case, ChatGPT). We will demonstrate a way to engage ChatGPT productively in collaborative co-creation. Finally, we will explore how curation by collective human intelligence creates trusted, contextualized, AI-assisted co-creation.

The collective intelligence of humans and machines

In parallel to the development of large language models, a third wave of AI focuses on integrating the collective intelligence of humans and machines, with roots in how humans build trusted knowledge: the scientific method.

Collective intelligence is believed to underlie the remarkable success of human society.⁴

Engaging human collective intelligence is fundamental to building trust in knowledge. The Enlightenment established the scientific method and logic as the process for aggregating knowledge and for defending human rights against tyranny. It underpins the principles of critical thinking, including validating evidence, logical reasoning, and deliberation. In The Structure of Scientific Revolutions, Thomas Kuhn states: “Revolutions should be described not in terms of group experience but in terms of the varied experiences of individual group members. Indeed, that variety itself turns out to play an essential role in the evolution of scientific knowledge.”⁵ Debate, deliberation, and discussion guide the adoption of scientific knowledge.

Constructive deliberations are based on evidence and cite the reasons for a prediction or belief about an outcome. In his essay on the “Doctrine of Chances,” Thomas Bayes shows how evidence strengthens or weakens our confidence in what we know. If new evidence affirms our beliefs about an outcome, our confidence in that knowledge increases; if it contradicts our assumptions, we either challenge the evidence or revise our beliefs.
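
A minimal worked example of this updating rule, with hypothetical numbers chosen only to make the arithmetic transparent: Bayes’s theorem says

P(H | E) = P(E | H) · P(H) / [P(E | H) · P(H) + P(E | ¬H) · P(¬H)]

Suppose our prior confidence in a hypothesis H is P(H) = 0.5, and new evidence E is three times more likely if H is true (P(E | H) = 0.9) than if it is false (P(E | ¬H) = 0.3). Then

P(H | E) = (0.9 · 0.5) / (0.9 · 0.5 + 0.3 · 0.5) = 0.45 / 0.60 = 0.75

Affirming evidence lifts our confidence in H from 50% to 75%; disconfirming evidence would lower it by the same mechanism.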

The first generation of AI captured the experience and evidence of expert knowledge in hand-crafted systems (expert systems).⁶ The underlying technologies, logic and frame-based knowledge representation systems, could explain how a system arrived at an answer or prediction. Explanations revealed the facts and assumptions underlying the reasoning. Truth maintenance was a thing! When assumptions or underlying facts changed, the system’s logic reflected that change in its explanations. Systems could maintain multiple world views, allowing the exploration of alternative reasoning paths. Model-based reasoning demonstrated that effective systems could generate alternative outcomes from alternative models.⁷ Model-based thinking is important in linking the science of collective intelligence to next-generation AI.⁸ Scott Page’s book “The Model Thinker” explores the power of multiple-model thinking in prediction accuracy and decision-making.⁹
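
To make this concrete, here is a minimal sketch, in Python, of the kind of explainable, rule-based inference those systems performed. The facts, rules, and function names are hypothetical, invented purely for illustration; production expert systems were far richer:

# A minimal sketch of explainable, rule-based inference in the spirit of
# first-generation expert systems. All facts and rules are hypothetical.

# Each rule maps a set of antecedent facts to a consequent fact.
RULES = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def infer(evidence, rules):
    """Forward-chain over the rules, recording why each fact was derived."""
    facts = set(evidence)
    justifications = {}  # derived fact -> the antecedents that produced it
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                justifications[consequent] = antecedents
                changed = True
    return facts, justifications

def explain(fact, justifications):
    """Trace a conclusion back to its evidence, like an expert system's 'why?'."""
    if fact not in justifications:
        return f"{fact} (given as evidence)"
    reasons = "; ".join(explain(a, justifications)
                        for a in sorted(justifications[fact]))
    return f"{fact}, because: {reasons}"

facts, justifications = infer({"fever", "rash"}, RULES)
print(explain("recommend_isolation", justifications))

Because every derived conclusion carries its justification, the system can always answer the question “why?”. Retract a fact and re-run the inference, and both the conclusion and its explanation change with it, a crude analogue of truth maintenance.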

Current AI technologies learn models by finding correlations in historical data. Often called black-box AI, these systems learn statistical patterns from data and do not lend themselves to explanation. Black-box AI delivers answers without explanations, as ChatGPT itself so eloquently explained in the opening paragraphs of this article.¹⁰
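
By contrast, here is an equally minimal sketch of a statistical learner, using scikit-learn on synthetic toy data (both hypothetical, for illustration only). Even this simple model returns a prediction with no account of which sources or facts justify it:

# Minimal sketch: a statistical learner yields answers, not explanations.
from sklearn.linear_model import LogisticRegression

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # synthetic feature vectors
y = [0, 0, 0, 1]                      # synthetic binary labels

model = LogisticRegression().fit(X, y)
print(model.predict([[1, 1]]))  # a bare answer, with no account of why
# There is no analogue of an explain() trace here: the learned coefficients
# encode statistical correlations, not the provenance of any conclusion.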
