Insights, thoughts, and experiences in the world of software development.
September 26, 2024
Artificial Intelligence is transforming the landscape of web development. AI is now woven into nearly every facet of the field, thanks to the rise of popular, free online tools that automate work once done by hand.
Whenever I talk to family or friends outside the computer science world, the first thing they ask is, “So… AI… pretty crazy, right?” They look at me, expecting a clear reaction. Will I say, “Nah, it’s overhyped”? Or, “Oh yeah, AI is going to completely change the world”?
Having actively used AI tools like ChatGPT since their release, I’ve gathered a lot of insight into what these products actually do. To those unfamiliar with them, they might seem like indescribable wonders of technology. And in some ways, they are; ChatGPT represents a genuine paradigm shift. Why else would so many massive tech companies scramble to invest billions just to keep up? But the reality is more complex.
Back in 2019, I came across the subreddit “r/SubSimulatorGPT2,” a community populated entirely by bots running the then newly open-sourced GPT-2 language model. No human posts were allowed; the bots, trained on internet data, simulated an entire Reddit community on their own.
The subreddit was hilarious. Most posts and comments were gibberish, but every now and then gems like “My cat and I are getting f**king divorced,” “Life pro tip: don’t help people with their day,” and the creepily self-aware “We are likely created by a computer program” would surface, gaining massive upvotes from amused spectators.
At the time, there was no immediate use for this technology beyond entertainment. Yet, behind the laughs, there was a sense of awe. Sure, the bots mostly weren’t fooling anyone into thinking they were human, but occasionally they displayed an eerie, almost sentient quality that hinted at a larger breakthrough on the horizon.
When GPT-3 arrived in 2020 and ChatGPT followed in late 2022, that breakthrough became clear. Companies like Alphabet and Meta weren’t surprised that AI had arrived, but they were caught off guard by just how far large language models (LLMs) had come, especially considering how rudimentary GPT-2 had seemed just a few years prior.
LLMs have one primary goal: to sound correct. They take in vast amounts of information, train on that data, and then output seemingly new data designed to mimic accurate responses. When you ask ChatGPT a question, it doesn’t comb through a massive database for the right answer. Instead, it uses its training to generate an answer that sounds as plausible as possible. And most of the time, sounding correct means being correct, so ChatGPT's responses tend to be accurate.
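To make the idea concrete, here is a deliberately tiny sketch of next-word generation. The hand-built probability table below is purely illustrative (real LLMs learn billions of statistics over tokens, not a lookup dict), but it captures the core behavior: at each step, the model emits whatever continuation is most *plausible* given what came before, with no lookup of a verified answer anywhere.

```python
# Toy "language model": for each word, the likelihood of possible next words.
# This hand-built table is only an illustration of the idea -- a real LLM
# learns these statistics over tokens from vast training data.
NEXT_WORD = {
    "the":     {"capital": 0.6, "answer": 0.4},
    "capital": {"of": 1.0},
    "of":      {"france": 0.7, "spain": 0.3},
    "france":  {"is": 1.0},
    "is":      {"paris": 0.8, "lyon": 0.2},
}

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        candidates = NEXT_WORD.get(words[-1])
        if not candidates:
            break  # no known continuation: stop generating
        # Pick the most plausible next word -- not a *verified* one.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the capital"))  # the capital of france is paris
```

Nothing in this loop checks whether Paris is actually the capital of France; the answer comes out right only because plausible continuations of the training data happen to be correct ones.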
But as we’ve seen, that’s not always the case. The viral question, “How many ‘r’s are in the word ‘strawberry’?” (to which ChatGPT often replies “2, of course”), highlights this limitation. ChatGPT doesn’t engage in real cognition—it generates responses that seem right, regardless of whether they actually are.
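What makes the failure striking is that the task itself is trivial for ordinary, deterministic code, which inspects every character instead of generating a plausible-sounding reply:

```python
def count_letter(word: str, letter: str) -> int:
    # Deterministic counting: examine each character, no guessing involved.
    return sum(1 for ch in word.lower() if ch == letter)

print(count_letter("strawberry", "r"))  # 3
```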
I’ve asked ChatGPT complex math questions, and it consistently delivers detailed, confident answers. They sound right. But about half the time, upon closer inspection, they’re completely wrong. There’s no true understanding behind its responses—just an impressive act of mimicry.
That’s not to say ChatGPT is useless—far from it. I use it every day and will likely use it even more as time goes on. The technology is improving, with new models being refined daily. However, we’re not yet at the point where you can fully trust AI to solve complex problems on its own.
This holds true in the world of coding as well. I’ve used GPT-4 and GitHub Copilot extensively in my coding projects and have probably cut down my work time by 50%. But even with these AI models designed specifically for coding, they often trip up. They hallucinate issues that don’t exist, break features that were working, and struggle to parse larger files with thousands of lines of code. Despite the AI’s assistance, I still play a vital role in the process—I still need to understand my code and be able to manually fix errors that the AI can’t.
In the end, I believe we’re still many years away from a full AI revolution, even in fields like computer science, where AI is most advanced. I don’t think incremental improvements to existing AI models will be enough to replace human involvement entirely. For that to happen, we’ll need a leap as significant as the one from GPT-2 to GPT-3.
Until then, AI remains an incredibly useful tool—one that eliminates a lot of the busy work—but it’s still just that: a tool. It won’t replace the need for human expertise anytime soon. And for now, that’s a good thing. It reminds us that technology works best when we stay in the loop, guiding it to achieve the best results.