Foreword by ChatGPT:
In the realm of human achievements, few fields have captured our collective imagination and pushed the boundaries of possibility as profoundly as artificial intelligence (AI). From its humble beginnings to its modern-day prominence, the history of AI is a testament to human ingenuity, innovation, and the relentless pursuit of creating machines that can simulate intelligent behavior.
In this article, we embark on a captivating journey through the annals of AI, exploring its fascinating evolution and pivotal milestones. We delve into the moments that sparked the birth of AI as a scientific discipline and examine the groundbreaking breakthroughs that have shaped its trajectory. We witness the tireless efforts of brilliant minds who dared to imagine a future where machines could think, learn, and reason like humans.
From the pioneering work of early visionaries such as Alan Turing, John McCarthy, and Marvin Minsky, to the boom and subsequent winter of AI research, we uncover the triumphs and setbacks that have shaped the field. We explore the rise of expert systems, machine learning, and neural networks, revolutionizing the way we approach AI and paving the way for transformative applications across various industries.
The journey of AI is filled with remarkable stories of perseverance and innovation. We witness the birth of intelligent assistants, self-driving cars, and advanced robotics, ushering in a new era where AI interacts seamlessly with our daily lives. We also delve into the ethical challenges and considerations that arise as AI becomes more integrated into society, underscoring the importance of responsible development and deployment.
As we reflect on the rich history of AI, it becomes clear that this is a field of relentless curiosity and untapped potential. The path to true artificial intelligence may still lie ahead, but the milestones we have achieved thus far serve as beacons of progress, inspiring the next generation of AI enthusiasts, researchers, and engineers.
So, join us as we embark on this captivating expedition, tracing the footsteps of those who dared to dream of creating machines that can think. From its humble origins to its limitless future, the history of AI is a testament to the indomitable spirit of human intellect and the unquenchable thirst for knowledge.
Prepare to be enthralled by the captivating story of AI, where science fiction meets reality, and the impossible becomes possible.
Happy reading!
The preceding foreword was generated by an AI: I asked ChatGPT to write a foreword for my essay, and it happily obliged.
Physicist Alastair Rae observed, “By the end of the nineteenth century it seemed that the basic fundamental principles governing the behavior of the physical universe were known…” Before that century ended, however, radioactivity was discovered, upending everything that was known about the elements that make up the universe. A century of technological change has put humans on the moon, sent millions of people across the skies of the globe every hour, and delivered hundreds of thousands of hours of moving images to devices in the palms of our hands, devices that can also provide anyone with information about nearly everything known through the entirety of our existence. We now stand at the threshold of the next major technological change of our lifetimes. With IBM Watson, Amazon Alexa, Google Assistant, Apple Siri, and now OpenAI’s DALL·E 2 and ChatGPT, the age of AI has arrived, and we can only imagine what more artificial intelligence has in store.
So what exactly is Artificial Intelligence?
First we have to understand that AI is a field of study, not just some evil robot ready to take over our planet: one that “combines computer science and robust datasets” to solve problems. Artificial intelligence also encompasses machine learning and deep learning, which are often mentioned alongside it. “These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.”
But how did we get this definition and what about AI in general?
Throughout the modern age, “science fiction familiarized the world with the concept of artificially intelligent robots.” Introduced in the Wizard of Oz stories as early as 1907 and most popularly known through Pinocchio, by the mid-20th century the concept of “artificial intelligence” was widely known, though not understood in the sense we understand it today. Alan Turing, the man widely credited with defining the basic form of AI as we know it, first showcased the possibility of “robots,” now AI, in his 1950 paper “Computing Machinery and Intelligence,” in which he considered how intelligent machines might be built. What truly set Turing’s concept apart from others, however, was his proposed way of testing their so-called “intelligence.” With this he laid the building blocks of what would later revolutionize the world as we knew it.
Though we now know Alan Turing as a great cryptographer and a highly respected computer scientist, in the 1950s a 37-year-old theorist proposing a completely new way of creating, not to mention thinking about, artificial intelligence was not nearly as respected as someone with decades of experience. Unfortunately, Turing was unable to test his theory himself due to the primitive state of computing technology and a lack of funds and support to continue his work.
The Golden Age
The golden age, in which computers made the greatest strides in both performance and affordability, ran from the late 1950s to the early 1980s. Computers of this era could store far more information than Turing could have possibly imagined and could process it faster than ever. It was in this age that many projects bearing some resemblance to what we know as AI today appeared. “Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively.” An early functioning example of machine learning is Frank Rosenblatt’s Mark 1 Perceptron. This computer was modeled on the brain’s neural networks and, as such, learned through trial and error. “These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions.” A similar defense-agency funding model for computing research would later give rise to the network that became the modern World Wide Web.
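To give a sense of what “learning through trial and error” meant, here is a minimal sketch of the perceptron learning rule that Rosenblatt’s Mark 1 Perceptron embodied in hardware. The function names, learning rate, and the toy AND-function dataset are illustrative choices, not details of the original machine.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights by trial and error: nudge them after each mistake."""
    n = len(samples[0])
    w = [0.0] * n          # one weight per input feature
    b = 0.0                # bias (threshold) term
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict: fire (1) if the weighted sum crosses the threshold.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else 0
            # Trial and error: adjust weights only when the guess is wrong.
            error = target - prediction
            if error != 0:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Example: learn the logical AND function (a linearly separable problem).
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
w, b = train_perceptron(inputs, targets)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
               for x in inputs]
print(predictions)  # the learned weights reproduce AND: [0, 0, 0, 1]
```

The key idea, and the reason it counts as “learning,” is that nobody tells the machine the correct weights; it converges on them purely by correcting its own mistakes, which is exactly what the perceptron can do for any linearly separable problem.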
Regardless of its popularity among defense agencies and insider scientists and researchers, at this point AI served little purpose for the general public. That would change from the late 1990s through the early 2000s, when many significant achievements in AI and machine learning arrived, such as Deep Blue, IBM’s chess-playing program, defeating the reigning world chess champion and renowned grandmaster Garry Kasparov in 1997. “This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision making program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward but in the direction of the spoken language interpretation endeavor. It seemed that there wasn’t a problem machines couldn’t handle.” The era culminated with the launch of Kismet, a robot able to recognize and display human emotions; with it, Alan Turing’s dream of a humanoid AI came a step closer to fulfillment, the furthest such development seen up to that point.
The Build-up
Even though it may seem that AI made its leaps and bounds only in the past couple of years, that is simply not true. “It turns out, the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem.” With machines such as Apple’s M2 Ultra Mac Studio offering nearly 200 gigabytes of unified memory and multiple terabytes of storage, processing power, memory, and storage space are simply not an issue, and with this type of technology becoming more mainstream and accessible, many companies like OpenAI have started to pop up and boom in popularity.
Following IBM’s phenomenal success with Deep Blue, yet another, even better AI stepped up to the plate. “In an historic event, in February 2011 IBM’s Watson computer competed on Jeopardy! against the TV quiz show’s two biggest all-time champions. Watson is a computer running software called Deep QA, developed by IBM Research.” Though slow at first, nearly losing its grip on victory mid-round, it quickly bounced back, easily tripling the scores of the others. Halfway through the game, Watson recognized that its designated process for answering questions was too slow and changed it mid-round of its own accord. This monumental event showcased the very ability Turing had envisioned: to adapt and make decisions. “On the back of this very public success, in 2011 IBM turned Watson toward one of the most lucrative but untapped industries for AI: healthcare.”
The Gold Rush, or is it?
We have arrived at the AI gold rush, “an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process.” Technology companies around the world are working on monetizing AI. With its many applications in our everyday lives, from personalized entertainment and editing assistance to cheat detection and, yes, cheating on schoolwork, we can hardly imagine a day without interacting with AI, passively or not.
Unfortunately, it’s not all rosy. After a very turbulent decade following its success on Jeopardy! and its promising future in healthcare, IBM Watson was eventually “sanded down for parts” in 2022. AI’s role in defamation also took a turn for the worse in late 2020, when many actors, celebrities, and, unfortunately, one poor cheerleader had their likenesses imitated by AI-assisted deepfakes and voice modifications. The issue reached the public eye in early 2021, when the famous actor Tom Cruise was “deepfaked” by a content creator on TikTok, who used voice and face manipulation to completely fool the world before revealing himself as a young, up-and-coming deepfake creator.
AI-based recruiting tools have also been found to increase bias in hiring: Amazon scrapped a recruiting tool in 2018 after it was found to be biased against women candidates. To add insult to injury, in 2020 a Detroit man was wrongfully arrested with the aid of facial recognition. While facial recognition technologies claim around 90% accuracy, they give a false sense of infallibility to the law enforcement agencies that have spent millions purchasing and implementing them.
Future
On the lighter side of things, a groundbreaking (literally) AI-powered weed whacker, yes, that’s right, has recently sold out to industrial farms around the world. “In 2021, Carbon Robotics unveiled the third-generation of its Autonomous Weeder, a smart farming robot that identifies weeds and then destroys them with high-power lasers. The company now has taken the technology from that robot and built a pull-behind LaserWeeder — and it kills twice as many weeds.” Though minimal, this slight change in agriculture points to a very promising and hopeful future for AI growth and assistance in our daily lives. We can foreseeably expect advancements in autonomous driving and automatic translation, such as AI-assisted driving for people with disabilities or the elderly, as well as real-time translation from your mobile phone, both of which are already underway in Google’s Waymo and Translate divisions respectively.
As far as evil robots taking over the world, I guess we’ll just have to wait and see.
My Sources (chronological order):