LLMs and AI: The Next Step in Conversational Programming

Published on Sun May 14 2023

Conversational programming is an emerging concept that seeks to democratize computing. Large Language Models (LLMs) such as OpenAI’s GPT have been instrumental in its development, and their release has generated excitement among tech enthusiasts. Conversational programming moves away from the specialization and up-front planning of classic software use toward informality and reactive, incremental steps. The idea is to make computing more human-like and to let users create software with far greater ease.

Conversations in real life always start from a shared context, and LLMs need that context for the same reasons. Most innovations will arrive inside existing apps, whose users already share a context. That context helps pare down the limitless possibilities of natural language.
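
In practice, "sharing a context" often means packing the app state the user already sees into the prompt before their request is sent. The sketch below assumes a hypothetical spreadsheet app ("SheetFlow") and a generic chat-message format; the field names and roles are illustrative, not a specific vendor's API.

```python
# Sketch: seeding an LLM conversation with the app's shared context.
# The app name, roles, and message shape here are assumptions for
# illustration, not a real product or API.

def build_context_prompt(app_name, user_role, open_document, request):
    """Pack the state the user already 'sees' into a system message,
    so the model interprets the request within that shared context."""
    context = (
        f"You are an assistant inside {app_name}. "
        f"The user is a {user_role} currently viewing '{open_document}'. "
        "Interpret requests relative to this context."
    )
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": request},
    ]

messages = build_context_prompt(
    app_name="SheetFlow",          # hypothetical app
    user_role="financial analyst",
    open_document="Q2 budget",
    request="Total the travel column",
)
# messages[0] carries the shared context; messages[1] the actual ask.
```

With the context up front, a terse request like "total the travel column" no longer has limitless readings; it can only mean one thing inside the open document.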

Another important consideration is trust: two people who phrase the same request differently must get the same result before a system can be relied on. This matters for LLMs because they have a habit of answering similar inputs in different ways. If they have access to a set of commonly asked requests, they can reason better about the likely meaning of a new, similar request.
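
One way to get that consistency is to normalize incoming phrasings onto a fixed set of canonical requests, so different wordings of the same ask trigger the same action. The sketch below uses plain word overlap as the similarity measure; a real system would likely use embeddings, and the request set and threshold are assumptions.

```python
# Sketch: mapping varied phrasings onto a canonical request set, so
# two people asking the same thing get the same action. Word-overlap
# (Jaccard) similarity stands in for a real embedding comparison.

CANONICAL_REQUESTS = {
    "sum the selected column": "SUM_COLUMN",
    "sort rows by date": "SORT_BY_DATE",
    "export the sheet as csv": "EXPORT_CSV",
}

def match_request(user_text):
    """Return the canonical action whose phrasing best overlaps the input,
    or None when nothing is close enough to act on."""
    words = set(user_text.lower().split())

    def overlap(phrase):
        phrase_words = set(phrase.split())
        return len(words & phrase_words) / len(words | phrase_words)

    best = max(CANONICAL_REQUESTS, key=overlap)
    return CANONICAL_REQUESTS[best] if overlap(best) > 0.2 else None
```

Returning None for unmatched input matters as much as the matching: rather than guessing, the system can ask a clarifying question, which keeps its behavior predictable.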

There must also be agreement on the form of the unit of work, outcome, or progress. As long as the conversation persists, an LLM can reshape the described form of an object to fit fixed specifications. Apps and organizations that specialize in particular model domains will come under pressure to open those areas to autonomous requests. However, corporate training should not inculcate a brand or product as a base type.

LLMs differ from any other goal-oriented system in that they can analyze their own reasoning and criticize the outcome. They can launch background tasks that return with the required information. AutoGPT-style projects are already using APIs to connect LLMs to one another and act as task-management agents.
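
The task-management pattern those projects follow can be reduced to a small loop: plan a goal into subtasks, execute each, and collect results. In this sketch the `plan` and `execute` functions are stubs standing in for LLM and API calls, so only the control flow is shown.

```python
# Sketch: a minimal task-management loop in the AutoGPT spirit.
# plan() and execute() are hypothetical stand-ins for LLM/API calls,
# stubbed here so the agent's control flow is visible.
from collections import deque

def plan(goal):
    # A real agent would ask an LLM to decompose the goal into subtasks.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(task):
    # A real agent would dispatch the task to a tool, API, or another LLM.
    return f"done: {task}"

def run_agent(goal):
    tasks = deque(plan(goal))
    results = []
    while tasks:
        task = tasks.popleft()
        results.append(execute(task))  # background work returns its result
    return results
```

A real agent would also feed each result back into `plan`, letting the model criticize its own progress and add or drop subtasks mid-run; the fixed list here keeps the example deterministic.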

Conversational programming requires that the scope of a conversation mirror a human “mental stack” rather than a computer’s. The LLM system has to work within the limits of human cognition, creating things in response to requests and reporting outcomes at the same level of abstraction they were asked for. Returning arcane error codes immediately breaks the conversation; the system must keep answering in terms that hold value for the user.
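
Concretely, that means catching low-level failures before they reach the user and rephrasing them at the level of the original request. The error codes and wordings below are illustrative assumptions, not any real API's errors.

```python
# Sketch: keeping the conversation at the user's level by translating
# low-level failures into replies phrased like the original request.
# The codes and messages are invented for illustration.

FRIENDLY_ERRORS = {
    "ERR_TIMEOUT_504": "That took too long, so I'll retry it in the background.",
    "ERR_PERM_403": "You don't have access to that resource; I can request it for you.",
}

def report_outcome(request, error_code=None):
    """Report success or failure in the user's terms, never as a raw code."""
    if error_code is None:
        return f"Done: {request}."
    # A raw code like ERR_PERM_403 would break the conversation;
    # fall back to a safe, human-level message for unknown failures.
    return FRIENDLY_ERRORS.get(
        error_code,
        "Something went wrong; I've kept your work so nothing is lost.",
    )
```

The fallback branch is the important design choice: even an unanticipated failure produces a reply the user can act on, so the conversation retains its value.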

The conversation should smooth away the small pitfalls that make technical tasks less straightforward, while giving the average employee the confidence to create resources. Conversational programming seeks to make computing more accessible to the average user, and it is the future of computing.