Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Old 04-10-2016, 02:44 PM   #81
blueski
Scoring Winger
 
Join Date: Dec 2014
Exp:
Default

You can make anyone say anything you want with this technology. I wonder how long it will be before video evidence becomes useless (see link below).

Last edited by blueski; 04-10-2016 at 04:12 PM.
blueski is offline   Reply With Quote
Old 04-10-2016, 03:33 PM   #82
Dan02
Franchise Player
 
Dan02's Avatar
 
Join Date: Jun 2004
Location: Calgary
Exp:
Default



fixed link
Dan02 is offline   Reply With Quote
The Following User Says Thank You to Dan02 For This Useful Post:
Old 05-10-2016, 11:04 AM   #83
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Interesting article. It presents a decent, albeit somewhat extreme, view of both sides of the AI debate.

Should We Be Afraid of AI?

Quote:
Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.
I think his view of Singularitarians is a bit too strawman-ish for my liking:
Quote:
Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely ‘could’, as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.
And while he rightly constrains AIs by the same laws/conditions that constrain Turing Machines, his claim that AI cannot achieve true intelligence rests on the assumption that the human brain is definitely not just an extremely efficient Turing Machine. I think that is still an open question.

Quote:
Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine.
Yes, AlphaGo formulated patterns based on millions of Go games. But perhaps we do the same - only more efficiently. We don't need millions of Go games; we can pull strategies and patterns derived from multiple other sources and apply them to problems in other domains. Is AI fundamentally unable to do this, or have we just not figured out how to implement it well?
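
As an aside, the "Turing Machine" the article leans on is a very simple object: a finite table of rules that reads and writes one tape cell at a time. Here is a toy simulator (purely my own illustration, not from the article) whose little rule table adds 1 to a binary number - the article's point is that every digital computer, quantum or otherwise, can only compute what a machine like this can compute:

Code:
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (symbol_to_write, head_move, next_state)."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # sparse, "infinite" tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# A tiny "program": add 1 to a binary number (scan right, then carry leftwards).
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "halt"),
}

print(run_turing_machine(increment, "1011"))  # prints 1100 (11 + 1 = 12)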
psyang is online now   Reply With Quote
Old 05-16-2016, 09:19 AM   #84
DynastyEra
Farm Team Player
 
Join Date: Jan 2016
Exp:
Default

I've always been generally interested in the science side of AI and the hypothetical ramifications: either a utopian, self-sustaining society with AI taking over the menial tasks, or one that will see humanity as a road bump.
This (long) yet satisfying look at what the author sees as something that needs to be talked about is worth the read. I'll just leave it here...

http://http://waitbutwhy.com/2015/01...olution-1.html
http://http://waitbutwhy.com/2015/01...olution-2.html
DynastyEra is offline   Reply With Quote
The Following User Says Thank You to DynastyEra For This Useful Post:
Old 05-16-2016, 10:00 AM   #85
psyang
Powerplay Quarterback
 
Join Date: Jan 2010
Exp:
Default

Quote:
Originally Posted by DynastyEra View Post
I've always been generally interested in the science side of AI and the hypothetical ramifications: either a utopian, self-sustaining society with AI taking over the menial tasks, or one that will see humanity as a road bump.
This (long) yet satisfying look at what the author sees as something that needs to be talked about is worth the read. I'll just leave it here...

http://waitbutwhy.com/2015/01/artifi...olution-1.html
http://waitbutwhy.com/2015/01/artifi...olution-2.html
Fixed the links - they weren't quite properly formatted.
psyang is online now   Reply With Quote
The Following User Says Thank You to psyang For This Useful Post:
Old 09-30-2022, 02:27 PM   #86
Macman
Self Imposed Retirement
 
Join Date: Dec 2020
Location: Calgary
Exp:
Default

Elon Musk to reveal friendly robot Optimus today.

Musk will take the stage at Tesla’s AI Day to reveal details about the cyborg dubbed Optimus, which he claims will revolutionize physical work.

If it materializes, Optimus could initially disrupt manufacturing jobs that make up roughly 10 percent of U.S. labor, or $500 billion in yearly wages, Gene Munster, managing partner of Loup Ventures, wrote in an analysis.


Per the Washington Post.

Last edited by Macman; 09-30-2022 at 02:30 PM.
Macman is offline   Reply With Quote
Old 09-30-2022, 02:29 PM   #87
Knut
 
Knut's Avatar
 
Join Date: Oct 2002
Exp:
Default

^ It probably will not be able to distinguish between a child and a bag of leaves. Throw the kid into the compost bin.
Knut is online now   Reply With Quote
Old 09-30-2022, 06:42 PM   #88
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

Quote:
Originally Posted by Knut View Post
^ It probably will not be able to distinguish between a child and a bag of leaves. Throw the kid into the compost bin.
That's a feature, not a bug.
Fuzz is offline   Reply With Quote
Old 09-30-2022, 07:30 PM   #89
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

There are many ways to compare human intelligence with artificial intelligence, but one key distinction is that humans are conscious while artificial intelligence is not. This means that humans are aware of their own thoughts and experiences, while artificial intelligence is not. This difference is significant because it means that humans can understand and think about their own thoughts and experiences, while artificial intelligence cannot. As a result, the Turing test is insufficient to measure consciousness.

Or at least that's what GPT-3 had to say when I asked it to write a paragraph on the topic.
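
For anyone who wants to reproduce this sort of thing, it only takes a few lines against OpenAI's completions API. Rough sketch only - the API key is a placeholder, and the model name and prompt are just reasonable 2022-era choices, not necessarily what produced the paragraph above:

Code:
import openai  # pip install openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available via the API in 2022
    prompt=(
        "Write a paragraph comparing human intelligence with artificial "
        "intelligence, focusing on consciousness and the Turing test."
    ),
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())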
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 09-30-2022, 07:33 PM   #90
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

GPT-3 certainly knows about the Oilers anyways.

Quote:
The Edmonton Oilers are an embarrassment to the NHL because they are a terrible hockey team. They have no offense, no defense, and no goaltending. They are a laughingstock, and their fans are some of the most clueless in the league.
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 09-30-2022, 08:20 PM   #91
Josh
Scoring Winger
 
Josh's Avatar
 
Join Date: Nov 2003
Location: Beaverton, Oregon
Exp:
Default

Quote:
Originally Posted by Knut View Post
^ It probably will not be able to distinguish between a child and a bag of leaves. Throw the kid into the compost bin.

Tesla owner here - I wish they'd work on making their cruise control (Autopilot) work correctly rather than taking on building robots, etc.

I otherwise enjoy the car but can't stand Elon.
Josh is offline   Reply With Quote
Old 10-12-2022, 11:33 AM   #92
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

A 20-minute podcast of Joe Rogan interviewing Steve Jobs where the dialogue and voices are entirely AI-generated. The dialogue is pretty poor quality, but similar attempts probably won't be in the not-so-distant future. Kind of cool.

https://podcast.ai/
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 10-13-2022, 06:38 PM   #93
topfiverecords
Franchise Player
 
topfiverecords's Avatar
 
Join Date: Feb 2010
Location: Hyperbole Chamber
Exp:
Default

Quote:
Originally Posted by JohnnyB View Post
A 20-minute podcast of Joe Rogan interviewing Steve Jobs where the dialogue and voices are entirely AI-generated. The dialogue is pretty poor quality, but similar attempts probably won't be in the not-so-distant future. Kind of cool.

https://podcast.ai/
I want this in the future to allow you to choose your broadcast crew of choice for sports. Bob Cole and Harry Neale doing games in 2030.
topfiverecords is online now   Reply With Quote
Old 10-13-2022, 07:22 PM   #94
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

Quote:
Originally Posted by topfiverecords View Post
I want this in the future to allow you to choose your broadcast crew of choice for sports. Bob Cole and Harry Neale doing games in 2030.
Good idea. I would love to watch playoff games with a tandem of Peter Maher and Ed Whalen calling the games.
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 11-03-2022, 02:59 PM   #95
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

This is very cool. Looking forward to the time when I can have a virtual assistant like this.

__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 11-21-2022, 11:02 PM   #96
JohnnyB
Franchise Player
 
JohnnyB's Avatar
 
Join Date: Mar 2006
Location: Shanghai
Exp:
Default

Always surprises me that I need to bump this thread from pages back. Would have thought AI was a more generally interesting topic.

Anyway, it sounds like GPT-4 could be coming out as early as next month or the start of next year, and that it may be hundreds of times as powerful as GPT-3 and be just as big a leap forward as the jump from GPT-2 to GPT-3. If this is the case, it's going to be wild. Very exciting.

https://thealgorithmicbridge.substac...silicon-valley
__________________

"If stupidity got us into this mess, then why can't it get us out?"
JohnnyB is offline   Reply With Quote
Old 11-22-2022, 01:16 AM   #97
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

I don't know if this belongs here, but I was watching a documentary on the Terminator and they brought up the paperclip problem, and I thought it was completely fascinating.

https://cepr.org/voxeu/columns/ai-and-paperclip-problem

Quote:
Philosophers have speculated that an AI tasked with a task such as creating paperclips might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over itself, and so it would self-regulate to prevent this outcome. Humans who create AIs with the goal of acquiring power may be a greater existential threat.
Quote:
The notion arises from a thought experiment by Nick Bostrom (2014), a philosopher at the University of Oxford. Bostrom was examining the 'control problem': how can humans control a super-intelligent AI even when the AI is orders of magnitude smarter. Bostrom's thought experiment goes like this: suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips.
It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat (think The Terminator).
This AI is much smarter than us, so it is likely to win that battle. We have a situation in which an engineer has switched on an AI for a simple task but, because the AI expanded its capabilities through its capacity for self-improvement, it has innovated to better produce paperclips, and developed power to appropriate the resources it needs, and ultimately to preserve its own existence.
Quote:
If an AI can simply acquire these capabilities, then we have a problem. Computer scientists, however, believe that self-improvement will be recursive. In effect, to improve, an AI has to rewrite its code to become a new AI. That AI retains its single-minded goal but it will also need, to work efficiently, sub-goals. If the sub-goal is finding better ways to make paperclips, that is one matter. If, on the other hand, the goal is to acquire power, that is another.
The insight from economics is that while it may be hard, or even impossible, for a human to control a super-intelligent AI, it is equally hard for a super-intelligent AI to control another AI. Our modest super-intelligent paperclip maximiser, by switching on an AI devoted to obtaining power, unleashes a beast that will have power over it. Our control problem is the AI's control problem too. If the AI is seeking power to protect itself from humans, doing this by creating a super-intelligent AI with more power than its parent would surely seem too risky.
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 11-22-2022, 06:27 AM   #98
Fuzz
Franchise Player
 
Fuzz's Avatar
 
Join Date: Mar 2015
Exp:
Default

https://www.decisionproblem.com/paperclips/index2.html

You really need to play the game (several times!) to experience the paperclip reality in all its horror. Prepare to lose some life-hours.
Fuzz is offline   Reply With Quote
The Following 2 Users Say Thank You to Fuzz For This Useful Post:
Old 11-22-2022, 09:56 AM   #99
CaptainCrunch
Norm!
 
CaptainCrunch's Avatar
 
Join Date: Jun 2002
Exp:
Default

Quote:
Originally Posted by Fuzz View Post
https://www.decisionproblem.com/paperclips/index2.html

You really need to play the game(several times!) to experience the paperclip reality in all its horror. Prepare to lose some life-hours.
Yeah I started playing it last night, and two hours later I realized it was 2 in the morning.
__________________
My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!
CaptainCrunch is offline   Reply With Quote
Old 11-22-2022, 11:40 AM   #100
Russic
Dances with Wolves
 
Russic's Avatar
 
Join Date: Jun 2006
Location: Section 304
Exp:
Default

Quote:
Originally Posted by JohnnyB View Post
Always surprises me that I need to bump this thread from pages back. Would have thought AI was a more generally interesting topic.

Anyway, it sounds like GPT-4 could be coming out as early as next month or the start of next year, and that it may be hundreds of times as powerful as GPT-3 and be just as big a leap forward as the jump from GPT-2 to GPT-3. If this is the case, it's going to be wild. Very exciting.

https://thealgorithmicbridge.substac...silicon-valley
I think it's a bit like compound interest... at some point the numbers and consequences get so involved that people's minds just nope out of it. AI is moving so quickly now and will impact so many facets of life that I think most people just tune it out. On top of that, it's not exactly accessible to the layman who's only half-interested.

What's wild to me is there's this thing that could rank alongside the discovery of fire in terms of altering humanity, and we're all going to witness it explode in the next 5 years. 200,000 years of humans and we get to see this happen. It's insane.

I used it for work for the first time the other day. Nothing profound - I just used DALL-E to take a generic image and widen it for a web banner. Even something as minuscule as that took me by surprise. It was up there with the most impressive things I've seen in tech.
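
For the curious, that kind of widening is DALL-E's image-edit (outpainting) endpoint: you drop the original onto a larger transparent canvas and the model fills in the transparent margins. A rough sketch of the idea (file names, prompt, and sizes are just placeholders, and since edits come back square, a real banner would mean stitching a couple of these together):

Code:
import openai            # pip install openai
from PIL import Image    # pip install Pillow

openai.api_key = "sk-..."  # placeholder

# Centre the original image on a larger transparent canvas; the transparent
# margins are the regions DALL-E is allowed to paint in.
original = Image.open("source.png").convert("RGBA")
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(original, ((1024 - original.width) // 2,
                        (1024 - original.height) // 2))
canvas.save("canvas.png")

# With no separate mask supplied, the image's own transparency marks what to fill.
result = openai.Image.create_edit(
    image=open("canvas.png", "rb"),
    prompt="extend the scene naturally to the left and right for a wide banner",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # URL of the generated, widened image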
Russic is offline   Reply With Quote

Tags
they will overtake us

