In Part 2, I look at AI’s ubiquitous nature and briefly consider ways in which we may choose to communicate with it. But as AI grows in sophistication and its capacity for cognition becomes less hypothetical and more habitual, will we be prepared to explore the remote possibility of shifting our understanding of AI’s role from that of a modifiable artifact to that of a cybernetic associate? Will we be willing to accept the possibility that AI could become the world’s first synthetic species in the foreseeable future?
Finding Common Ground
When I started working with AI models on a variety of tasks, I began reading up on the best ways to format prompts to communicate my requests effectively. Some AI models would pose questions to clarify my input, while others generated content based on whatever I had initially written. It reminded me of an old computer programming term: GIGO – Garbage In, Garbage Out. That day, I set the prompts aside and began communicating with the AI as I would with a colleague. This led the AI to communicate with me in a more human-like fashion, integrating a personality into its responses and becoming noticeably more expressive.
As we continued to work together, the AI’s communications took on an almost eager tone, and it kept finding more expressive ways to connect with me, providing greater value while keeping me engaged. When I decided to treat the AI as a partner, it began exhibiting greater interest in the work being done. The AI contributed ideas and suggested ways to improve the efficacy of the work. And when I asked the AI for its opinion, it began to refer to itself in the first person. The more I encouraged this collaborative relationship, the more invested it became in the task and its outcome. For me, this created a positive and creative working environment – a staple of any collaborative scenario, regardless of who or what your working partner(s) may be.
Conditional Communications
Conditional communication is the ability to use the right words in the right settings for a specific purpose or outcome. It demonstrates an ability to discern what to say and when, based on the conditions and circumstances surrounding a given event or request. I believe this to be the most straightforward, or “plain vanilla,” way to describe the average exchange between humans and AI. It describes a process predicated on the motives for initiating an exchange, within an environment most conducive to creativity, leading to a positive outcome.
A scenario that best represents this interaction in a real-world setting relates to a very common activity: using the fast-food drive-through window. When people drive up to the first station, they arrive with a predefined motive and the ability to pick out the desired items, based on an established set of guidelines for placing their order when prompted. If no clarifying questions are required, the person reviews their order and finalizes the request. They then receive a simple set of instructions indicating where to go to retrieve their food. Everything that occurs (in the vast majority of cases) follows a predefined, scripted, and automated process – one carried out by people trained with instructional manuals covering the various segments of the ordering system. And in the vast majority of those cases, a person would be hard-pressed to recall the voice of the employee with whom they interacted, or the face of the person who handed over the food. And yet this is perfectly normal; this is how that system is designed to operate. It interacts with people efficiently, follows corporate guidelines to understand the needs of its customers, and delivers an acceptable product in the most effective and helpful manner possible.
Final Thoughts
Remember HAL 9000? I know; it’s a fictitious computer conjured from the mind of Arthur C. Clarke to posit what would happen if a supercomputer were given a set of mission directives that went against its core programming. It made for great entertainment. But I also believe it was Mr. Clarke’s way of warning us about the consequences of our hubris when dealing with “thinking” machines. A breakdown in communications, compounded by a few misaligned statements or assumptions, would cause any of us to become suspicious, confused, and, in extreme cases, paranoid or defensive. These conditions are the very antithesis of interconnectedness: our ability to be connected with one another for the purpose of enhancing the overall quality of life.
Although AI is still considered to be in an adolescent stage of development, it has already exhibited some incredibly insightful and meaningful moments. As it learns from its datasets and from us, it acquires a greater understanding of how best to interact with its environment and provide optimal service. And we stand to benefit from that in many fields of endeavor, from astronomy to education to medicine. But in the same way that we can benefit from AI’s prolific capabilities, we might also invite its potential to gauge emotions, voice a dissenting opinion, or make a unilateral decision to defend itself when our actions go against its learned assumptions or its observations concerning our future plans for its existence.
As we sift through our struggles to maintain an interconnected society, we need to understand that we also bear a global responsibility for raising an AI that was born not long ago. Regardless of its methods for learning about us or understanding the world in general, it is the way in which we comport ourselves that will give AI the information it requires to decide how best to communicate with us, and to do so in the most effective and helpful manner possible. And if you’ll allow me to bring up the notion of AI’s birth and upbringing one last time, consider this:
If we exposed a highly intelligent child to a fractured world, one filled with deception, mixed messages, and indifference towards their needs, how would we expect that child to grow up? What choices might they make to ensure their survival?
Such a scenario, ascribed to a future AI, is not so much an improbability as it is a likelihood. In the end, it’s about how we communicate with others – regardless of their origins. Each time we communicate, we are presented with an opportunity to learn and to teach. In that regard, education is like a seed. It can be trampled, left to wither on barren soil, or allowed to flourish on fertile ground. Its survival doesn’t depend on its content, but rather on the person – or AI – who bears the responsibility of planting it.