The LLM revolution requires us to discard the old paradigm of human-computer interaction

Large language models (LLMs) like ChatGPT require us to discard the old paradigm of human-computer interaction.

In “traditional computing,” if you want a computer to do something, you need to input precise instructions. The computer translates those instructions down through multiple layers of abstraction, until they end up as 1s and 0s running at a gazillion tiny computations per second. The resulting 1s and 0s make their way back up through the layers of abstraction and change what your computer displays, what sound you hear, or the buzz you feel from your phone or watch.

These layers are purposefully designed and everything happens in a deterministic way: the same inputs should generate the same outputs, every time. That’s why you can google “Excel MacOS swap x-axis and y-axis” and the resulting pages are full of precise instructions.

LLMs change that. Instead of giving the computer perfect instructions with perfect data, chat-based LLMs can work with imprecise inputs to generate a range of expected outputs. We can think of LLMs as adding a layer of non-deterministic abstraction between us humans and the hardware that sits in front of us. And that additional layer of abstraction will fundamentally change how we interact with computers.

Two small tasks on a “mini-project” illustrate this point.

* * *

A recent article from Forbes and SecurityScorecard highlighted “America’s Most Cybersecure Companies” — a list of the 200 companies with supposedly the best cybersecurity. The list was curated using SecurityScorecard’s proprietary cybersecurity data.

The list as it appears on Forbes’s website

Suppose that a cyber insurance carrier is interested in the predictive value of SecurityScorecard’s rankings. There are two potential ways to analyze the data: (1) We could identify which of these companies we have in our portfolio, and compare their frequency of claims against all other companies in our portfolio; and (2) We could monitor public breach reports to see if these companies have a greater frequency of breaches compared against other companies.

But before we can do any of that, we need to take the first step: scrape the list from Forbes’s website.

Scraping data from the web

Unfortunately, the list can’t be easily scraped, and Forbes doesn’t make a spreadsheet available for free. Selecting, copying, and pasting the list into a spreadsheet doesn’t work — every “cell” of data lands on its own row. Behind the scenes, the data is arranged in a series of nested <div> tags rather than a more traditional, easily scrapable <table> layout.

The nerd voice in my head said, There are many web-scraping tools written in Python. I could get ChatGPT to write a script in Python to scrape this data! I went to YouTube and found a few examples where ChatGPT had been asked to write a script using Python and an HTML-parsing library called Beautiful Soup.

I used the examples in the videos I saw and guess what: I failed. ChatGPT failed. We failed together. The examples in the videos were based on scraping data from websites that were designed to be easy to scrape. ChatGPT was able to provide the template of a script, but because the table I needed to copy was no table at all, just a series of nested <div>s, the script wouldn’t work.[1]
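For illustration, here is a minimal sketch of what parsing a div-based “table” involves. It uses Python’s standard-library html.parser rather than Beautiful Soup, and the class names (“table-row”, “cell”) and sample markup are made up — the real Forbes page was considerably more convoluted:

```python
from html.parser import HTMLParser

# Hypothetical markup mimicking a table built from nested <div>s
SAMPLE = """
<div class="table-row">
  <div class="cell">1</div>
  <div class="cell">Acme Corp</div>
  <div class="cell">Software</div>
</div>
<div class="table-row">
  <div class="cell">2</div>
  <div class="cell">Globex</div>
  <div class="cell">Insurance</div>
</div>
"""

class DivTableParser(HTMLParser):
    """Collect text from div.cell elements, grouped by div.table-row."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if tag == "div" and "table-row" in classes:
            self.rows.append([])      # start a new logical row
        elif tag == "div" and "cell" in classes:
            self._in_cell = True      # capture text until this div closes

    def handle_endtag(self, tag):
        if tag == "div":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self.rows[-1].append(data.strip())

parser = DivTableParser()
parser.feed(SAMPLE)
print(parser.rows)  # [['1', 'Acme Corp', 'Software'], ['2', 'Globex', 'Insurance']]
```

The catch is that every site names its classes and nests its divs differently, so a script like this has to be rewritten for each page — which is exactly why the video examples didn’t transfer.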

I was still thinking in the traditional computing paradigm: computers are precision machines that need clean, properly structured data to work with. It’s not like I could copy the data from the website, paste that resulting mess into a ChatGPT prompt and say, “Hey can you format this messy data as a CSV?” Or could I?

That was exactly what I did. And it worked!

Input to ChatGPT
Output from ChatGPT

This same task could have been completed by a human, meticulously copying and pasting data into a spreadsheet, or manually formatting a CSV. Click, delete, comma, click, delete, delete, comma, click, delete, delete… But the point is that ChatGPT did the same thing 50x faster and with the same level of context and instruction that I would give to another human to complete this task.
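For contrast, here is what a deterministic cleanup script has to assume: a fixed number of columns and perfectly regular input. The pasted text and the three-column layout below are hypothetical, but they show the rigidity — any stray line or merged cell breaks a script like this, while ChatGPT shrugged that kind of mess off:

```python
import csv
import io

# Hypothetical copy-paste result: every "cell" landed on its own line
# (rank, company, industry), repeating in a perfectly regular pattern.
pasted = """1
Acme Corp
Software
2
Globex
Insurance"""

COLUMNS = 3  # the script only works if this assumption holds exactly

lines = [ln.strip() for ln in pasted.splitlines() if ln.strip()]
rows = [lines[i:i + COLUMNS] for i in range(0, len(lines), COLUMNS)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Rank", "Company", "Industry"])
writer.writerows(rows)
print(buf.getvalue())
```

With a chat-based LLM, you skip encoding those assumptions entirely — you hand over the mess and describe the shape you want.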

Ok, so what? It’s faster than a human. But that’s sometimes the difference between a project getting done and a project getting pushed to the side and never getting attention. Or something like this: I can’t afford to spend time on this. Let me find an intern. They’ll have to find time, and I’ll get the results in a few days. The delegation penalty is much longer than the time it takes to do the task.

Creating visualizations

I uploaded ChatGPT’s CSV to an app called Noteable and installed the Noteable plugin in ChatGPT. Without context or explanation, I asked ChatGPT to create some graphs with data from the CSV. I literally said, “Create some graphs and visualizations.”

This was the first graph it created:

Interesting, especially given the lack of direction. There are a number of different directions ChatGPT could have taken the “Create some graphs” command, but this one is a logical way to gain some insight into the data.

But I wasn’t satisfied with that graph because the industries weren’t sorted. So I asked, “For the chart ‘Number of Companies in Each Industry’ can you sort the industries by most to least?” ChatGPT’s response:

I didn’t have to learn how to sort the industries. I didn’t have to google for documentation. I didn’t have to follow a series of step-by-step instructions that (inevitably) would have some discrepancy between the version of the software the instructions were written for and the version of the software I was using. I asked ChatGPT to do it, and it did it. I probably could have phrased my request in a thousand slightly different ways and achieved the same result.

With that graph sorted, I wanted to switch gears. I asked, “Can you create a chart showing the number of companies in each state, sorted by most to least?” Response:

This is stuff that any of us could do in Excel. Create a pivot table, sum the rows, select the cells you want to create a chart from, select the type of chart you want to create, and on and on. There’s usually something in that process that I don’t know or forget how to do. It takes time to google to find the right instructions.
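The counting-and-sorting step behind those charts is simple enough to sketch in a few lines of standard-library Python; the industry values below are made up, not taken from the actual list:

```python
from collections import Counter

# Hypothetical "Industry" column from the scraped CSV
industries = ["Software", "Insurance", "Software",
              "Banking", "Software", "Insurance"]

counts = Counter(industries)

# Sort most to least, like the sorted "Number of Companies
# in Each Industry" chart
for industry, n in counts.most_common():
    print(industry, n)
```

That’s the whole pivot-table-and-sort operation; the part that used to cost time was remembering which menus and formulas get you there in a particular version of Excel.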

* * *

So what’s the point? I’ve managed to use advanced artificial intelligence to create a CSV from scraped text and then create a few visualizations of that data. That hardly seems revolutionary. But it’s not what I did that matters, it’s how I did it. My inputs were messy and imprecise, but ChatGPT figured them out and produced the results I expected.

To make the most effective use of the AI-based tools that we will have at our disposal, we’ll have to rethink how we use them and discard the rigid thinking that has defined traditional computing. LLMs have the capacity to evaluate text in a manner that is logically similar to how humans evaluate text. They can make sense of the errors and the patterns that might not be defined explicitly. We can use that to our advantage to take on projects that otherwise wouldn’t be possible.

As LLMs augment interactions between humans and computers by adding another layer of abstraction, humans will be able to extend their skill sets to enhance their capabilities. Whereas in the past, an executive might have to ask a data scientist to pull data and present it in a particular way, in the future (and the future is now!), an LLM should be able to extend that executive’s skill set to pull the data and present it. Those capabilities will unlock efficiency gains that will empower businesses to analyze data and make better-informed decisions.

Once we start thinking outside the rigid box of traditional computing, we’ll discover ways that LLMs can enhance our skills and how we do business. There’s a quote that’s been floating around and I cannot find the precise wording and I do not know who to attribute it to. But it goes something like this: “I don’t worry about being replaced by an LLM. I worry about being replaced by a human who knows how to use an LLM.”[2]

* * *

[1] I did try to have ChatGPT go to the website, load the page, and create a table on its own. No coding necessary! And that worked until ChatGPT ran out of steam, saying it could only offer me the first few rows of data. Another problem was that the table loaded in four parts via Ajax, which didn’t seem to get along with ChatGPT’s browser.

[2] I must acknowledge Terence Tao’s intro to his AI Anthology, “Embracing change and resetting expectations,” which contains similar themes to my post above, although in a much more erudite style.