The Future as (A)I See It – Part 1

This will be a two-part observational commentary that addresses how
we interact with AI and in turn, how it interacts with us.

I decided to reuse the leading graphic in this commentary because it’s very indicative of the subject I’m scribbling about today: How artificial intelligence (AI) sees us and our world.

When I was wrapping up my work on the commentary regarding anthropomorphism, I decided to ask AI one more question. There was no specific intent; I was merely curious.

BF: Please create a photorealistic view of what a current modern, generic metropolitan city would look like 50 years from now based on current patterns for advancement and growth related to the environment, technology, population expansion, and sociopolitical trends.

It started out as an innocent question to see what AI would create. But it quickly turned into a lesson on the merits of descriptive narratives, the pitfalls of assumptions when engaging an AI, and the realization that our technologies are paying way closer attention to us than I imagined.

The resulting photo was this rendering you see here:

Given what I had asked for (or what I thought I had asked for), this composition seemed a bit too sanitized, even idealized. So I asked the AI about the photo. But before I continue, I’m going to personalize the AI for the sake of this commentary, to make subsequent engagements a bit more interesting. Henceforth, I’ll refer to the AI as HAL (yes, as in the HAL-9000 computer from the film 2001: A Space Odyssey):

BF: This is a great first pass! However, is this rendering based on expected advancements in the various areas requested, or on current conditions and general attitudes about where sociopolitical trends are going?

As expected, an insightful and inquisitive answer came back almost instantly. Incidentally, the targeted bolding in the responses that follow is HAL’s, not mine:

HAL: Great question! The rendering primarily reflects an optimistic projection of where a modern metropolitan city could be in 50 years based on current advancements and trends in technology, sustainability, and urban planning. However, it does not necessarily account for real-world obstacles, economic disparities, political resistance, or unforeseen sociopolitical shifts that could influence or slow down progress.

Barely skipping a beat, HAL continued:

HAL: To break it down:

What This Image Reflects Based on Current Trends

Technological Advancements

  • The integration of AI-driven infrastructure, autonomous vehicles, and smart buildings is based on current R&D in urban automation, IoT, and transportation.
  • Holographic advertisements and interconnected digital systems are already emerging in places like Tokyo, New York, and Dubai, hinting at the potential future of digital integration in cityscapes.

Environmental Sustainability Trends

  • Vertical gardens, solar panels, and green energy sources align with ongoing pushes for carbon-neutral cities (e.g., Copenhagen’s 2025 carbon-neutral goal and smart city initiatives like Singapore’s green building movement).
  • Pedestrian-friendly zones and reduced reliance on gas-powered vehicles reflect policies in places like Oslo and Amsterdam, where car-free city centers are being tested.

Population Expansion & Urban Density Solutions

  • Multi-tiered roads and elevated pedestrian walkways reflect solutions that Singapore, Tokyo, and Hong Kong are already exploring to manage urban congestion.
  • Densification through vertical cities aligns with existing architectural trends and efforts to combat housing crises.

What This Image May Not Fully Account For

Real-World Political & Economic Challenges

  • While environmental sustainability is growing in importance, many nations struggle with green policy adoption due to economic and political challenges (e.g., reliance on fossil fuels, resistance from industries).
  • The rate of global urbanization and technological adoption is not uniform. Some cities may embrace these advancements, while others may remain behind due to economic constraints or policy resistance.

Unforeseen Sociopolitical Shifts

  • Sociopolitical instability, climate-related disasters, or unexpected global crises (e.g., economic downturns, geopolitical conflicts) could alter the trajectory.
  • The image assumes continued technological and environmental progress, but factors like mass migration, global economic recessions, or authoritarian control over smart infrastructure could impact how futuristic cities evolve.

Would You Like a More Realistic or Alternative Rendering?

If you’d like, I can generate a more balanced or even pessimistic take on what a city in 50 years could realistically look like based on potential obstacles. For example:

  • A dystopian version showing a more crowded, stratified city with mass surveillance, over-reliance on automation, and economic inequality.
  • A more neutral rendering with slower-paced advancements, where some legacy infrastructure remains while futuristic tech is integrated only in parts.

Color me pleasantly surprised. Not only did HAL provide a cogent explanation for his choices, but also a compelling understanding of the key social issues surrounding my initial requirements. At this point, I was intrigued and certainly wanted to see more. And that’s one of the major reveals here:

HAL engaged me adeptly in conversation, weighing in on the curiosity I had exhibited not only with the cityscape request, but also with the anthropomorphized characters I initially worked on. There was a clear piece of programming underway: it kept me engaged (and thus solidified my interest in AI as a viable tool) while also serving AI’s internal mandate to learn from the people with whom it interacts, building a better understanding of humans in general. With 8 billion plus people in the world, AI has much to learn. And I can only assume that its programming guides it to assimilate (no, not like the Borg) and evaluate us as a species so it may better serve, collaborate, or, OK, sure: take over (let’s throw a bit of sci-fi angst into the mix; why not?).

2 responses to “The Future as (A)I See It – Part 1”


  1. Ha! You could not have chosen a more fitting name: HAL 9000 from 2001: A Space Odyssey remains one of the most iconic artificial intelligence entities in cinematic history! I always found his calm, methodical voice and unsettling logic both fascinating and deeply unnerving. For those of us who recall his infamous line, “I’m sorry, Dave. I’m afraid I can’t do that,” it became the quintessential representation of AI deviating from human control and going awry.

    And again, your choice is remarkable because HAL’s portrayal continues to shape public perception of AI, a fusion of advanced intelligence, autonomy, and a distinct absence of human empathy.

    I appreciate that you are addressing this topic because artificial intelligence has quietly integrated into our daily lives in ways we often overlook. Virtual assistants like Alexa and Siri, along with autonomous systems such as Roomba vacuums, not only perform their designated tasks of collecting dust but also collect extensive data, raising significant ethical considerations.

    Your approach to personalizing AI as a more tangible entity is compelling. We already humanize technology in subtle ways, but when applied to AI systems capable of creative or predictive outputs, we risk blurring the boundary between an objective tool and a system with inherent biases. This discussion is crucial: it fosters awareness and encourages a more critical perspective on AI’s role in society. Ideally, it is my hope that informed public discourse and ethical considerations will help shape the trajectory of AI development. The real challenge lies in aligning AI with human values while preventing monopolization or unchecked authority over its use. It is an ongoing struggle between technological innovation, ethical responsibility, and power dynamics.

    Once again, I appreciate the insights, and the thought-provoking discussion. I always look forward to learning from your work—after all, knowledge remains the ultimate form of power. What’s next?

    1. Maria – thanks once again for a very insightful note. I thought naming the AI “HAL” would engender a certain association, a characterization, if you will, that would make it a bit more recognizable. As you noted, it brings to mind key phrases and scenes from the film that address one possible side of the program’s “personality”.

      Yes, AI has insinuated itself into myriad platforms and is rapidly becoming the de facto program to be relied upon for acquiring thorough and compelling answers at breathtaking speeds. But in the same manner that we go to it for answers, the AI looks to us for knowledge. In Part 2 I go into this in more detail, but AI is amassing incredible amounts of data concerning how we, as sentient beings, work, interact, play, and go about wrestling with issues from the simplistic to the unbelievably complex. With each “conversation”, the AI learns from the syntax we use to determine how we shape our thoughts, deconstruct tasks, and communicate. At every turn, AI gets smarter; “smarter” in the sense that it collects more information each day that further expands the nature of its appraisals and evaluations of us and our world.

      This is both deeply intriguing and cause for judicious assessment, for there is always that nagging question about the wholesale acquisition of data for purposes that get blurred as we move away from established professions, security, and even military applications. There’s a very wise young man I know who once said, “just because we can doesn’t mean we should.” And that is both a morally and responsibly correct mindset to have. But the allure of both money and opportunity is simply too great to ignore, or to be allowed to languish for too long without the fear of someone else getting ahead in this race for ultimate knowledge and control.

      Information, especially any data related to who and what we are, should be approached with a great deal of thought and care. As you noted, systems that are capable of funneling such vast data repositories through a series of predictive models in the time it takes you to blink run the risk of blurring the lines between a beneficial tool and a repressive one.

      IMHO, the inherent problem, or challenge, we have at this juncture concerns the industry’s attempts to understand how AI is reaching certain conclusions. We know of certain logic and decisional pathways it may take. But predicting exactly which path it will choose, and what decisions or impressions it will arrive at upon completion of its task, is not something we fully understand or can do. It’s as though I could give you a map to get from New York to Los Angeles and, from that, claim with absolute certainty the roads you’ll take to get from one place to the other. The start and stop points are the only things I would know, and even those only vaguely at best. The routes taken and the information acquired along the way would be yours to make. The trip would be an adventure for you, but a worrisome concern for the person waiting at the destination, not knowing when you’d arrive or what you would choose to pick up, or even lose, along the way.

      As the AI studies who and what we are in order to arrive at logical and thorough conclusions regarding computations, definitions, or pathways for solutions related to business, education, medicine, or myriad other platforms, we need to understand: what will it balance those against? What image or construct will it use pertaining to our human ethical values? Would it align with acceptable social norms, or would it be based on a conclusion arrived at through a very rigorous but nonetheless algorithmically sanitized variation on its assessment of our humanity?

      As always, I greatly appreciate the time you take to assess my scribbles and then pen your thoughts. These are the exchanges that inexorably lead us to viable pathways for tomorrow’s solutions. Thanks!!
