
Will AGI Merit the Turing Award?


A Thought Experiment Exploring the Natures of Artificial and Human Intelligence

Image by the Author using ChatGPT-4

The debate over Artificial General Intelligence (AGI) is intensifying, with discussions centered on when, if ever, it will surpass human intelligence. Elon Musk optimistically predicts that AGI will outsmart humans by 2025. In contrast, Gary Marcus, highlighting significant capability gaps, has countered Musk’s forecast with a $10 million bet against the imminent dominance of AGI. Marcus argues that current AI lacks the ability to perform many intellectual tasks that humans manage effortlessly, such as navigating unfamiliar environments, quickly mastering new games, or creating meaningful and coherent narratives without human guidance.

Amidst this debate, skeptics like Daniel Warfield challenge the core premise, criticizing the anthropomorphization of AGI and questioning its potential to truly exceed human intelligence in all domains. Warfield’s skepticism underscores the broader implications of attributing human-like intelligence to machines, urging caution against overestimating their current and near-future capabilities.

This conversation leads to an intriguing thought experiment:

Could AGI one day be considered alongside top computer scientists for the Turing Award, often seen as the Nobel Prize of computing?

The Turing Award honors substantial contributions to the computing field, prompting us to explore whether a computing system could ever autonomously assess its own capabilities. This scenario probes the fundamental differences between artificial and human intelligence, despite their seeming convergence showcased by recent AI advancements.

Artificial and Human Intelligence

The ongoing debate about AGI surpassing human capabilities often reflects on I.J. Good’s 1948 skepticism, which challenged the direct comparison of human and mechanical intelligence. Good, a British mathematician and contemporary of Alan Turing, famously questioned the prevailing assumptions with a paradoxical question:

Can you pinpoint the fallacy in the following argument? ‘No machine (meaning any particular machine) can exist for which there are no problems that we (meaning every type of machine) can solve, and it (that particular machine) can’t.’ But we are machines: a contradiction.

He highlighted a fundamental inconsistency in equating artificial with human intelligence.

Today, the scaling law in AI suggests that significant enhancements in tackling complex problems can stem from increased computational power and extensive data collection. The remarkable achievements of AI, such as passing bar exams, creating engaging one-minute videos from text prompts, and solving International Mathematical Olympiad-caliber math problems at or above human level, propel optimism that AI might eventually exceed human intelligence.

However, acknowledging the potential of AGI to surpass human capabilities requires a deep appreciation of the broad spectrum of human intelligence, a trait that remains challenging to define and quantify. This complexity echoes physicist Richard Feynman’s view on machine intelligence:

If we cannot define what intelligence is, we cannot say whether machines can be intelligent.

Similarly, without a clear and comprehensive understanding of human intelligence, the claim that AGI will surpass it remains uncertain. Intriguingly, the ongoing surprises in the field may stem more from our evolving understanding of human, rather than artificial, intelligence.

Beyond Scaling: AGI as Abstract Reasoners?

This leads us to reconsider AGI not just as a problem solver, but as an entity capable of abstract reasoning. The challenge for AGI to surpass human intelligence might not lie solely in solving increasingly complex problems but also in understanding the nature of problem-solving itself. By examining the landmark achievements of Stephen Cook, we gain invaluable insight into a distinct aspect of human intelligence demonstrated by top computer scientists: abstract reasoning.

Stephen Cook’s groundbreaking work, recognized with the 1982 Turing Award, fundamentally changed how we approach problem-solving on computers. In his influential paper, “The Complexity of Theorem-Proving Procedures,” Cook introduced the P vs. NP problem (Cook, 1971). P and NP represent two classes of problems tackled by computers. P problems can be solved efficiently, while NP problems are those whose proposed solutions can be verified efficiently, even though finding a solution may require far more work. The P vs. NP problem is not a problem to be solved per se; it is an abstraction that asks whether this gap between verifying and solving is real, that is, whether every problem whose solution can be checked quickly can also be solved quickly.

Within the NP class, Cook identified NP-complete problems as the most formidable challenges. His crucial insight suggested that efficiently solving any one NP-complete problem would imply that P equals NP, indicating that complex problems, typically requiring exhaustive exploration of exponentially many potential solutions, could be solved without needing to examine each possibility. This breakthrough could revolutionize problem-solving across diverse fields such as computer science, mathematics, economics, and biology. This makes the NP class particularly intriguing because the solutions, while seeming tantalizingly within reach, remain elusive. Today, the P vs. NP problem continues to be one of the most significant and unresolved challenges in the realms of computer science and mathematics.
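
To make the verify-versus-solve asymmetry concrete, here is a minimal Python sketch built around Boolean satisfiability (SAT), the problem Cook proved NP-complete. Checking a proposed assignment takes time roughly linear in the formula’s size, while the naive solver below may, in the worst case, examine all 2^n assignments. The formula and helper names are illustrative only, not taken from Cook’s paper.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of literals.
# A positive integer i stands for variable x_i; -i stands for NOT x_i.
# Example formula: (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]

def verify(formula, assignment):
    """Check a proposed truth assignment; runs in time linear in the formula size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_solve(formula, num_vars):
    """Naive SAT solver: tries all 2^n assignments, exponential in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None  # unsatisfiable

print(verify(formula, {1: True, 2: False, 3: True}))  # fast check: True
print(brute_force_solve(formula, num_vars=3))          # may require exhaustive search
```

The contrast on display, fast verification versus potentially exponential search, is precisely the gap the P vs. NP question asks about: whether the exhaustive search can always be avoided.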

A notable pilot study using GPT-4, a large language model, examined its ability to engage in complex, Socratic reasoning about the P vs. NP problem (Dong et al., 2023). While GPT-4 adeptly handles such intricate theoretical topics, it fundamentally diverges from human researchers in two key areas:

  1. Human Guidance: The study showed that these models often need human input to guide the conversation effectively. However, the kind of breakthroughs exemplified by Stephen Cook’s formulation of the P vs. NP problem generally require creative insights that current AI technologies cannot autonomously replicate.
  2. Understanding vs. Initiative: GPT-4 can understand and discuss the complexities of sophisticated topics by referencing a vast database of information. However, it remains uncertain whether it has the initiative to pioneer a groundbreaking paradigm, as Stephen Cook did with the P vs. NP problem.

Given that AI primarily excels at pattern recognition and data optimization, its abilities starkly contrast with the abstract reasoning and profound conceptual leaps that characterized Stephen Cook’s revolutionary work. This stark divergence brings us to a pivotal inquiry:

Is scaling AI’s computational power and data handling sufficient to replicate the kind of groundbreaking, original thought that merits accolades such as the Turing Award? Or do such profound intellectual achievements remain exclusively within the realm of human creativity?

Beyond Machines: AGI as Meta-Thinkers?

Building on this, we must consider whether advancements in AI, extending beyond mere scaling, can enable AGI to introspect on its own nature and limitations, similar to the capabilities of top computer scientists and mathematicians. This query revolves around ‘meta-thinking’ — the ability to comprehend the boundaries of thought itself, a characteristic exemplified by pioneers such as Kurt Gödel and Alan Turing, who laid the foundations for modern computing.

Can AGI evolve to not only solve problems but also understand and question the very processes that underlie its thinking?

Gödel and Turing redefined how we see mathematical thought with their work on formal systems for specific branches of mathematics, such as arithmetic. These systems are built on axioms, or rules taken as true, and logical steps that derive further truths, or theorems. They posed two major questions for any formal system: first, is it consistent and complete, that is, does it avoid contradictions while allowing every mathematical truth in the branch to be logically derived; and second, can the truth of any statement in the branch be decided mechanically, through a process or method that operates according to predefined rules and algorithms, without human intervention?
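
To give a flavor of what “deriving theorems mechanically” means, here is a tiny, purely illustrative Python sketch of a toy formal system, not Gödel’s or Turing’s: a handful of made-up axioms and a single inference rule (modus ponens) applied by rote until no new theorems appear.

```python
# A toy formal system: axioms plus one inference rule, applied mechanically.
# The axiom strings below are hypothetical and chosen only for illustration.
axioms = {"P", "P -> Q", "Q -> R"}

def modus_ponens(theorems):
    """From 'A' and 'A -> B', derive 'B'."""
    derived = set()
    for statement in theorems:
        if " -> " in statement:
            antecedent, consequent = statement.split(" -> ", 1)
            if antecedent in theorems:
                derived.add(consequent)
    return derived

theorems = set(axioms)
while True:
    new = modus_ponens(theorems) - theorems
    if not new:       # stop when the rule yields nothing new
        break
    theorems |= new

print(theorems)  # contains 'P', 'P -> Q', 'Q -> R', 'Q', 'R'
```

Gödel and Turing asked whether this kind of mechanical derivation, scaled up to real mathematics, can ever reach every truth, and whether truth can always be decided this way.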

Gödel’s Incompleteness Theorems tackled the first question. He showed that any consistent formal system capable of handling basic arithmetic will always have some truths that can’t be proven within the system itself — there will always be more to discover that we can’t derive just by following the rules (Gödel, 1931). His second theorem took it further, showing that such systems can’t prove their own consistency from within, challenging the very foundations of systematic mathematics.

After Gödel established the inherent limitations within formal systems, Alan Turing explored a different angle, tackling the question of computational limits. He demonstrated that no purely mechanical method — meaning no algorithm or computational process that follows a set series of steps — can decide the truth of every possible statement within these systems. His famous paper, “On Computable Numbers, with an Application to the Entscheidungsproblem” (Turing, 1936), illustrated that some problems are fundamentally beyond the reach of algorithmic computation, establishing the limits of what machines can solve.
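
Turing’s argument can be sketched in a few lines of Python. Suppose, hypothetically, that a function halts(program, arg) existed that always correctly predicts whether program(arg) eventually halts; the self-referential routine below would then halt exactly when it is predicted not to, a contradiction. The names are illustrative; the point is that no implementation of halts can exist.

```python
def halts(program, arg):
    """Hypothetical oracle: True if program(arg) eventually halts.
    Turing's result is precisely that no such algorithm can exist."""
    raise NotImplementedError("Undecidable in general.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # the program running on its own source.
    if halts(program, program):
        while True:   # loop forever if predicted to halt
            pass
    else:
        return        # halt if predicted to loop forever

# Feeding paradox to itself exposes the contradiction:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Either answer is wrong.
```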

These insights define the boundaries of what can be achieved with current machines and formal systems, but they also highlight the dynamic nature of human intelligence, which continually evolves, allowing us to transcend these limitations and innovate beyond previous and current inventions.

Given the limitations inherent to AGI, as with any formal system, might it be our very humanity — rather than our technological creations — that continues to evolve and surprise, potentially meriting accolades such as the Turing Award?

Conclusion

In our discussion, we have highlighted two distinct qualities that are prevalent among top computer scientists: abstract reasoning and meta-thinking. These qualities extend well beyond the technical skills typically associated with programming. Abstract reasoning deeply explores the essence of problem-solving, focusing not just on specific issues but on the nature of problem-solving itself. Meta-thinking involves profound reflection on the thinking process itself, examining the foundational aspects of cognitive operations. This raises an essential question:

Can AGI truly be considered to surpass human intelligence if it does not demonstrate the caliber of cognitive abilities seen in Turing Award laureates?

Moreover, some might question why foundational computer science has emphasized what is impossible or infeasible, constraints that also apply to AI. Recognizing these limitations should not be seen as resistance to AGI development or as underestimating its potential societal impacts. Instead, this understanding should inspire us to adopt a broader, more constructive perspective. By embracing the principles derived from foundational studies in computing — such as the inherent impossibilities and infeasibilities identified by pioneers like Gödel, Turing, and Cook — we effectively move our goalposts to infinity. With vast, uncharted territories to explore, we are encouraged to continually redefine what is possible.

References

Cook, S. A. (1971). The complexity of theorem-proving procedures. Proceedings of the Third Annual ACM Symposium on Theory of Computing, 151–158.

Dong, Q., Dong, L., Xu, K., Zhou, G., Hao, Y., Sui, Z., & Wei, F. (2023). Large Language Model for Science: A Study on P vs. NP. arXiv preprint arXiv:2309.05689.

Gödel, K. (1931). On formally undecidable propositions of Principia Mathematica and related systems I. Translated by B. Meltzer (1962). Edinburgh: Oliver and Boyd.

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42(1), 230–265.

