Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.
I think his view of Singularitarians is a bit too strawman-ish for my liking:
Quote:
Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.
Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely ‘could’, as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.
And while he rightly constrains AIs by the same laws/conditions that constrain Turing Machines, his claim that AI cannot achieve true intelligence rests on the assumption that the human brain is definitely not itself an extremely efficient Turing Machine. I think that is still an open question.
Quote:
Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine.
Yes, AlphaGo formulated patterns based on millions of Go games. But perhaps we do the same - just more efficiently. We don't need millions of Go games; we're able to pull strategies and patterns from multiple other sources and apply them to problems in other domains. Is AI fundamentally unable to do this, or have we just not figured out a way to implement it well?
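To pin down what "versions of a Turing Machine" actually means in the quote above, here's a minimal sketch of the abstract model in code. The machine, its rule table, and the bit-flipping example are all made up purely for illustration - the point is just that a finite table of (state, symbol) rules driving a read/write head over a tape is the whole model whose limits the article is appealing to.
Code:
# A minimal sketch of a Turing Machine simulator (illustrative only).
# A machine is just a finite rule table mapping (state, symbol) to
# (symbol to write, head move, next state), applied over a tape.

def run_tm(tape, rules, start_state="scan", halt_state="halt", blank="_"):
    tape = list(tape)
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table for a made-up one-state machine: walk right, flipping every
# bit, and halt when the blank symbol is reached.
FLIP_RULES = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_tm("001011", FLIP_RULES))  # prints 110100_ (trailing blank)
Anything a laptop, a Go engine, or (per the quote) a quantum computer can compute can in principle be expressed as a rule table like this; the open question is whether the same is true of a brain.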
I've always generally been interested in the science side of AI and the hypothetical ramifications of either a utopian society that is self-sustaining, with AI taking over the menial tasks, or one that sees humanity as a road bump.
This (long) yet satisfying look at what the author sees as something that needs to be talked about is worth the read. I'll just leave it here.
Musk will take the stage at Tesla's AI Day to reveal details about the humanoid robot dubbed Optimus, which he claims will revolutionize physical work.
If it materializes, Optimus could initially disrupt manufacturing jobs that make up roughly 10 percent of U.S. labor, or $500 billion in yearly wages, Gene Munster, managing partner of Loup Ventures, wrote in an analysis.
There are many ways to compare human intelligence with artificial intelligence, but one key distinction is that humans are conscious while artificial intelligence is not. This means that humans are aware of their own thoughts and experiences, while artificial intelligence is not. This difference is significant because it means that humans can understand and think about their own thoughts and experiences, while artificial intelligence cannot. As a result, the Turing test is insufficient to measure consciousness.
Or at least that's what GPT-3 had to say when I asked it to write a paragraph on the topic.
__________________
"If stupidity got us into this mess, then why can't it get us out?"
The Edmonton Oilers are an embarrassment to the NHL because they are a terrible hockey team. They have no offense, no defense, and no goaltending. They are a laughingstock, and their fans are some of the most clueless in the league.
A 20-minute podcast of Joe Rogan interviewing Steve Jobs where the dialogue and voices are all entirely AI generated. The dialogue is pretty poor quality, but similar attempts in the not-so-distant future probably won't be. Kind of cool.
Always surprises me that I need to bump this thread from pages back. Would have thought AI was a more generally interesting topic.
Anyways, it sounds like GPT-4 could be coming out as early as next month or the start of next year, that it may be hundreds of times as powerful as GPT-3, and that it could be as big a leap forward as the one from GPT-2 to GPT-3. If this is the case, it's going to be wild. Very exciting.
I don't know if this belongs here, but I was watching a documentary on the Terminator and they brought up the paperclip problem, and I thought it was completely fascinating.
Philosophers have speculated that an AI tasked with a goal such as creating paperclips might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over itself, and so it would self-regulate to prevent this outcome. Humans who create AIs with the goal of acquiring power may be a greater existential threat.
Quote:
The notion arises from a thought experiment by Nick Bostrom (2014), a philosopher at the University of Oxford. Bostrom was examining the 'control problem': how can humans control a super-intelligent AI even when the AI is orders of magnitude smarter? Bostrom's thought experiment goes like this: suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips.
It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat (think The Terminator).
This AI is much smarter than us, so it is likely to win that battle. We have a situation in which an engineer has switched on an AI for a simple task but, because the AI expanded its capabilities through its capacity for self-improvement, it has innovated to better produce paperclips, and developed power to appropriate the resources it needs, and ultimately to preserve its own existence.
Quote:
If an AI can simply acquire these capabilities, then we have a problem. Computer scientists, however, believe that self-improvement will be recursive. In effect, to improve, an AI has to rewrite its code to become a new AI. That AI retains its single-minded goal, but to work efficiently it will also need sub-goals. If the sub-goal is finding better ways to make paperclips, that is one matter. If, on the other hand, the sub-goal is to acquire power, that is another.
The insight from economics is that while it may be hard, or even impossible, for a human to control a super-intelligent AI, it is equally hard for a super-intelligent AI to control another AI. Our modest super-intelligent paperclip maximiser, by switching on an AI devoted to obtaining power, unleashes a beast that will have power over it. Our control problem is the AI's control problem too. If the AI is seeking power to protect itself from humans, doing this by creating a super-intelligent AI with more power than its parent would surely seem too risky.
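The "single-minded goal breeds self-preservation" step is easier to see with a toy calculation. Here's a back-of-envelope sketch - every number, the halving rule, and the "defend vs. produce" choice are invented purely for illustration, not a model of any real system:
Code:
# Toy model of the instrumental-convergence step in the paperclip argument.
# All quantities are invented for illustration only.

EPISODE_LENGTH = 10           # planning horizon, in abstract "steps"
BASE_SHUTDOWN_RISK = 0.2      # chance per step that humans switch the agent off

def expected_paperclips(defend_steps: int) -> float:
    """Expected paperclips if the agent spends the first `defend_steps`
    steps resisting shutdown (producing nothing) and the rest producing
    one clip per step. Each defence step halves the per-step shutdown risk."""
    risk = BASE_SHUTDOWN_RISK * (0.5 ** defend_steps)
    alive = 1.0     # probability the agent is still running
    total = 0.0
    for step in range(EPISODE_LENGTH):
        if step >= defend_steps:        # production phase
            total += alive              # one clip, weighted by survival odds
        alive *= (1.0 - risk)           # humans may pull the plug this step
    return total

if __name__ == "__main__":
    # A planner that cares only about paperclips still diverts effort
    # to self-preservation, because that raises expected output.
    for d in range(5):
        print(f"defend for {d} steps -> {expected_paperclips(d):.2f} expected clips")
Even in this silly model the best plan isn't "just make paperclips": the maximiser spends its first couple of steps protecting itself, because that raises expected output - which is the instrumental-convergence worry in a nutshell.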
__________________
My name is Ozymandias, King of Kings;
I think it's a bit like compound interest... at some point the numbers and consequences get so involved that people's minds just nope out of it. AI is moving so quickly now and will impact so many facets of life that I think most people just tune it out. On top of that, it's not exactly accessible to the layman who's only half-interested.
What's wild to me is there's this thing that could rank alongside the discovery of fire in terms of altering humanity, and we're all going to witness it explode in the next 5 years. 200,000 years of humans and we get to see this happen. It's insane.
I used it for work for the first time the other day. Nothing profound, just used DALL-E to take a generic image and widen it for a web banner. Even something as minuscule as that took me by surprise. It was up there with the most impressive things I've seen in tech.