I am obsessed with the rapid advancements in artificial intelligence (AI). However, the dystopian doom and gloom surrounding innovations such as GPT-4 and DALL·E has raised numerous questions and concerns about the implications of this technology for our lives.
AI is not a new phenomenon; it has been around since the 1950s, but it has come a long way in recent years. Researchers are now working to incorporate simulated emotions into AI systems, which will make them seem even more human-like. As a digital agency owner, I am interested in how we interact with technology and, more importantly, how we perceive AI as a result of these developments. Furthermore, as with all innovation and evolution, I want to understand how I can apply these technologies for the betterment of humanity. AI for good!
AI’s Evolution and Potential Risks
As AI continues to evolve, it is crucial to consider the potential risks and limitations of its development. For example, adding emotions to AI systems may lead people to develop emotional bonds with machines, further complicating the relationship between humans and technology. In addition, AI can exhibit remarkable intelligence in certain areas while remaining surprisingly inadequate in others. This phenomenon, sometimes called “dumb smartness,” is a byproduct of the engineering trade-offs required to optimise AI systems for specific tasks.
Artificial General Intelligence and Artificial Aliens
The concept of artificial general intelligence (AGI) – a single AI system that can outperform humans in virtually any intellectual task – may seem like an inevitable development to some. However, as someone deeply involved in this field, I believe this idea is more fantasy than reality. Instead, I envision a future where thousands of specialised AI systems, or artificial aliens, coexist and perform specific functions.
Like Spock from Star Trek, these AIs may exhibit high intelligence in certain areas but remain distinctly non-human. The benefit is that they don’t think like us, allowing them to approach problems and tasks from a unique perspective and become invaluable resources in fields such as scientific research and problem-solving. Over 150 million people joined ChatGPT in February 2023, using it to help drive changes in writing, thinking, idea generation and even research.
AI Self-Improvement and the Balance of Power
AI systems’ potential to self-improve raises concerns about the balance of power between humans and machines. Some fear AI becoming a dominant force, relegating humans to a subservient role. However, while AI can improve its cognitive abilities, it still requires a connection to the physical world to make a real impact. AI’s influence ultimately depends on humans interpreting and applying the insights it provides.
AI Self-Improvement and the Balance of Power: Real-World Examples
- AlphaGo and AlphaGo Zero: One of the most famous examples of AI self-improvement is Google DeepMind’s AlphaGo, a computer program that plays the board game Go. In 2016, AlphaGo made headlines by defeating the world champion Go player Lee Sedol. Later, DeepMind introduced AlphaGo Zero, which learned the game entirely by playing against itself, without any human intervention. AlphaGo Zero surpassed the original AlphaGo’s performance in a matter of days, demonstrating the power of self-improvement in AI systems (a toy sketch of this self-play loop appears after this list). However, while the AI’s abilities in playing Go were impressive, it still required human input and assistance for tasks beyond the game, highlighting the limits of AI dominance.
- OpenAI’s GPT-3: GPT-3 (Generative Pre-trained Transformer 3) is another example of an AI system with significant self-improvement capabilities. GPT-3 can perform various tasks, such as translation, summarisation, and answering questions, by learning from vast amounts of data. While GPT-3 has demonstrated exceptional language understanding and generation abilities, its applications still depend on human guidance and interpretation. For instance, GPT-3 might generate a well-written report, but humans must still decide whether the information is accurate, relevant, and appropriate for a given situation.
- Autonomous Vehicles: Autonomous vehicles, such as Tesla’s self-driving cars, are AI systems that continually improve their driving skills through machine learning and real-world experience. These vehicles use data from millions of miles of driving to enhance their understanding of road conditions, traffic patterns, and driving behaviour. However, the balance of power between humans and AI in this context remains contingent on human decision-making and oversight. For instance, autonomous vehicles require human approval for updates and safety regulations, and humans must still intervene in cases where the AI system encounters unfamiliar or challenging situations.
- AI in Healthcare: AI systems have shown great potential in healthcare, from diagnosing diseases to developing new drugs. For example, DeepMind’s AlphaFold successfully predicted protein structures, a breakthrough that could revolutionise drug development and disease understanding. While AlphaFold’s capabilities are impressive, it still relies on humans to validate its predictions and apply the findings to real-world problems. Thus, the balance of power between AI and humans in healthcare remains dependent on human expertise, oversight, and ethical considerations.
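To make the self-play idea concrete, below is a minimal, purely illustrative Python sketch of the kind of loop AlphaGo Zero popularised: an agent plays against a copy of itself, and the stronger version is kept. Everything here is invented for illustration, from the toy “game” to the agent representation and the promotion threshold; the real system learns with deep neural networks and Monte Carlo tree search.

```python
import random

def play_game(agent_a, agent_b):
    """Toy stand-in for a game of Go: higher total 'skill' wins more often."""
    score_a = sum(agent_a.values()) + random.random()
    score_b = sum(agent_b.values()) + random.random()
    return 1 if score_a >= score_b else -1  # 1 means agent_a won

def self_play_training(generations=20, games_per_generation=100):
    # The "champion" is the best agent found so far.
    champion = {"opening": 0.5, "midgame": 0.5, "endgame": 0.5}
    for gen in range(generations):
        # Create a challenger by slightly perturbing the champion's skills,
        # standing in for a learning update in the real system.
        challenger = {k: v + random.uniform(-0.1, 0.1) for k, v in champion.items()}
        wins = sum(play_game(challenger, champion) == 1
                   for _ in range(games_per_generation))
        # Promote the challenger only if it beats the champion convincingly.
        if wins / games_per_generation > 0.55:
            champion = challenger
            print(f"Generation {gen}: challenger promoted "
                  f"({wins}/{games_per_generation} wins)")
    return champion

if __name__ == "__main__":
    best_agent = self_play_training()
    print("Final agent skills:", best_agent)
```

Even in this caricature, the essential point survives: the loop improves the agent without any human gameplay in the training data, yet a human still defines what counts as winning and decides when to stop.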
Real-world examples such as AlphaGo, GPT-3/4, autonomous vehicles, and AI in healthcare illustrate that although AI systems can self-improve and achieve remarkable results, they still rely on human input, interpretation, and guidance; humans controlling AI remain the dominant force. Humans play a vital role in ensuring that AI remains a valuable tool for humanity by engaging with AI systems responsibly and ethically. As AI advances, society needs to maintain this balance of power, fostering a future where AI augments human abilities and works in harmony with human interests rather than becoming a potential threat.
Regulatory and legal issues arise, particularly around the decision-making process of AI systems, such as prioritising a car passenger’s safety over a pedestrian’s or vice versa. Furthermore, people are still coming to terms with the fact that even the creators of AI systems cannot fully explain how these systems arrive at their decisions. This unease will grow as AI becomes more sophisticated and moves closer to a general version. Interestingly, we are less disturbed by the fact that humans often cannot explain their own decisions than we are by AI’s inability to explain its own. This is because humans, too, are flawed in this respect, yet we demand more from our creations: we create things to solve problems, so ultimately they need to be better than us, and that scares us. To address this concern, a field of AI research called Explainable AI aims to create systems that can explain their decisions. However, even if this is achieved, the way AI communicates its reasoning to humans might still be reductive and inaccurate.
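To give a flavour of what Explainable AI is reaching for, here is a small sketch using a decision tree, one of the few model families whose reasoning can be read back directly as rules. This is only an illustration under assumptions: it presumes Python with scikit-learn installed, and the iris dataset and depth limit are chosen purely for readability.

```python
# A shallow decision tree is one of the few models whose "reasoning"
# can be printed verbatim as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned decision rules as plain text, so a human
# can trace exactly why the model classifies a flower the way it does.
print(export_text(tree, feature_names=iris.feature_names))
```

A deep network like GPT-4 admits no such readable trace, which is precisely why any explanation it offers of its own reasoning risks being reductive or inaccurate.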
As AI systems incorporate emotions, our understanding of and relationship with these technologies will become more complex, pushing us to reevaluate our perception of AI and delve into the possible implications of AI as an emergent life form or intelligence. For instance, one theory proposes that technology is an extension of the same self-organising forces that drive evolution, resembling the patterns observed in biological evolution. Nevertheless, as we progress, we must approach AI cautiously, considering its potential risks and limitations. This will enable us to harness the power of AI to improve humanity while also preparing ourselves for the challenges that this rapidly advancing technology may bring.