Experiences from Using AI as a Software Architect

Artificial intelligence excels at refining language and processing large text volumes, but lacks human-like contextual reasoning and emotional intelligence, Avraham Poupko said. Many human traits come into play when doing software architecture. As an architect, he suggests using AI for exploring tradeoffs and refining language with clarity and precision.

Avraham Poupko shared his experiences from using artificial intelligence in his daily work as a software architect at OOP conference.

Software architects and system architects will not be replaced anytime soon by generative artificial intelligence (AI) or large language models (LLMs), Poupko explained in How a Software Architect Uses Artificial Intelligence in His Daily Work. They will be replaced by software architects who know how to leverage generative AI and LLMs, and just as importantly, know how NOT to use generative AI.

AI is helpful for exploring tradeoffs, or for finding the right way to say something with precise language, such as choosing the correct word and sentence structure, Poupko mentioned. As a language model, AI provides a great deal of clarity:

As an experiment, I provided Gemini with the above paragraph and asked it to suggest an alternate wording. The result is:


Gemini>> “While AI cannot yet replace human architects due to the deep system, domain, and organizational knowledge required for effective design, it’s a valuable tool for exploring tradeoffs and refining language, providing clarity and precision in documentation.”

While the paragraph provided by Gemini might be more concise, it still needs some work. For example, notice the word “yet” that Gemini added to the sentence. “I do not think it belongs there at all,” Poupko said. If one can be offended by an LLM, he is offended by that, he added.

As humans, we are very good at “contextual reasoning”. That means we are good at taking in a great deal of context, along with previous experience, and working out the best way to behave, Poupko explained:

We know how to read a situation and body language, and gauge appropriate behaviour. That is an area where AI is currently very poor.

AI is most useful in tasks that involve written language. When it comes to reading a large amount of text and using the knowledge encoded in those texts, we might find that AIs are better than humans, Poupko said.

As an architect, he has learned that so many of his human traits come into play when he does software architecture, Poupko explained:

I learned about the importance of my ability to communicate. I learned about the importance of my Theory of Mind, meaning my ability to create a mental representation of other people’s mental model.

AI has made it clear to me that true “understanding” goes beyond understanding texts and technology. It is about understanding context and people, Poupko concluded.

InfoQ interviewed Avraham Poupko about using AI as a software architect.

InfoQ: How can we leverage our human strengths in a world enhanced by AI?

Avraham Poupko: Where AI is good at encoding knowledge from large amounts of text, humans are better at contextual reasoning.


For example, I provided the following prompt to Gemini:


Avraham>> I am building a web based AI agent. That knows how to take a problem or challenge from the user and use AI to ask clarifying questions, and then use AI to respond. Can you provide some system requirements for me to get started with?


The answer I got from Gemini was useful, and it is clear that it drew on knowledge from a wide range of sources:


Gemini>>

Choose a simple framework: Start with a lightweight frontend framework like React or Vue.js, and a backend framework like Flask.

Select pre-trained models: Utilize pre-trained models for NLP tasks from Hugging Face Transformers or other libraries.

Build a basic API: Create a simple API endpoint to handle user input and return basic responses.

Develop a basic UI: Create a basic web page with input and output fields.

Iterate and improve: Gradually add more features, refine the user interface, and improve the AI capabilities over time.

These are all good and helpful answers that are certainly appropriate.


When I asked the same question to a good friend who knows me well, she answered quite differently:


Friend>>Avraham, you know, that is an interesting idea that touches on the relationship between humans and LLMs. I think a good place for you to start would be to clarify to yourself exactly what will your "agent" do and what the LLM will do. I think you should start on your local PC. Don’t do any development. Spend a lot of time investigating how you can get the LLMs to ask provoking questions that will clarify your own thinking. If you can do some of that before OOP in Munich that would be cool. You can present findings in the talk, and maybe get some good feedback or partnerships. By the way - is the word "agent" the right word here? Maybe use something cool like "pre-thinker" or something.


Between my friend’s answer and the AI’s answer, I was able to have a really interesting conversation about the practical and theoretical aspects of using LLMs and what the role of an agent might be in the interaction.

InfoQ: What have you learned about generative AI’s abilities and limitations?

Poupko: I learned that AI is constantly able to surprise me, and that it is sometimes astonishingly good at gathering and articulating known information and applying it to a new situation.


However, conversing with AI is not the same as conversing with a human. AI is not curious about the world, and will never be my “friend” in the sense that we can share a common sense of value or of purpose.
