Flash Flash Revolution

Flash Flash Revolution (http://www.flashflashrevolution.com/vbz/index.php)
-   Critical Thinking (http://www.flashflashrevolution.com/vbz/forumdisplay.php?f=33)
-   -   The singularity, ASI, ie: crazy advanced AI (http://www.flashflashrevolution.com/vbz/showthread.php?t=146777)

Cavernio 02-22-2017 02:07 PM

The singularity, ASI, ie: crazy advanced AI
 
So this discussion should be centered on questions and ideas raised by these two enormous articles, not just AI in general:

http://waitbutwhy.com/2015/01/artifi...olution-1.html
the link to the second one is at the bottom of this page but also here
http://waitbutwhy.com/2015/01/artifi...olution-2.html

I have not read them in full, but I have skimmed both from start to finish.

I'm in the "ASI will never happen" category, or at least the "it won't have limitless capabilities" one. The second article notes this is an unpopular position among people in the field. Much like people cannot escape the confines of their mind or the universe, I do not think a computer can escape the confines of the logic that it was given, and that logic itself cannot lead to limitless knowledge. Basically, I think it will still run into the sides of a box that it cannot leave.

I might also argue that humanity and biological life as we know it IS the current universe's ASI, since time is relative: relative to the universe as we know it, humanity's intellect has developed extraordinarily fast. That makes me think of the idea of infinite creation, a mirror mirroring itself ad infinitum.

Soundwave- 02-22-2017 09:49 PM

Re: The singularity, ASI, ie: crazy advanced AI
 
Quote:

Originally Posted by Cavernio (Post 4524463)
Much like people cannot escape the confines of their mind or the universe, I do not think a computer can escape the confines of the logic that it was given, and that logic itself cannot lead to limitless knowledge.

I'm not going to read too much into the reasoning here, but computers and the mind have the same overall limitation, and neither has come anywhere close to reaching it.

Reach 02-23-2017 08:58 AM

Re: The singularity, ASI, ie: crazy advanced AI
 
I agree with your sentiments here. Our mind is our body, and our cognitive processes are fundamentally limited by our neurological architecture.

An AI will always be limited by its architecture as well. I don't doubt that AI could eventually become far more intelligent than we are, but the idea of an essentially unlimited intelligence seems impossible.

I suppose if we want to theorycraft here, the only way you could achieve an essentially unlimited artificial intelligence would be for the AI to expand its own architecture indefinitely. While it seems possible at face value for a computer to keep building on itself if it were intelligent enough to do so, you face fairly simple restrictions on what is possible: the resources required to build the materials the AI is composed of, the power sources required to run it, and the ever-increasing demands of constructing and maintaining the super AI.

The resource demands of such a super AI could quickly become absurd and unsustainable without the ability to extract more resources from the far reaches of space, which may or may not be viable. I'm imagining the scenario where an ASI continues to expand until it engulfs an entire planet: an interconnected network with hordes of worker AIs continually maintaining and constructing additions to the ASI. Eventually the ASI would have to reach an equilibrium where it simply can't expand anymore, based on the confines of the system it has built itself into. Exactly how intelligent that AI would be is anyone's guess, but it would still have fundamental limitations.

I'll stand on the pessimist end of things as well and say it won't happen any time soon. AI advancements will probably creep along in continual increments, most of them with practical applications that will help human life, but I don't think we're going to see a zero-to-hero ASI any time soon.

One reason I think people erroneously overestimate the potential of AI is how we've traditionally seen Moore's law: computational efficiency can only continue to double for so long. In the coming decade or two we're going to see efficiency plateau, as the maximal efficiency of computer parts is limited by the laws of physics. At some point, systems will have to become larger, not simply more efficient. This will place practical restrictions on ASI.
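To make that plateau concrete with a back-of-envelope sketch (my own illustrative numbers, not from the thread): if feature sizes kept halving on the classic two-year cadence from a ~5 nm process, they'd hit the ~0.2 nm scale of a silicon atom within a handful of halvings:

```python
# Back-of-envelope: how many more halvings until transistor features
# reach atomic scale? All numbers are rough illustrations.
feature_nm = 5.0        # assumed current process feature size
atom_nm = 0.2           # rough diameter of a silicon atom
years_per_halving = 2   # classic Moore's-law cadence

halvings = 0
while feature_nm / 2 >= atom_nm:
    feature_nm /= 2
    halvings += 1

print(halvings, "more halvings, about", halvings * years_per_halving, "years")
# prints: 4 more halvings, about 8 years
```

The exact numbers don't matter much; the point is that exponential shrinking runs into atoms within roughly the "decade or two" window mentioned above.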

MinaciousGrace 02-23-2017 10:30 AM

Re: The singularity, ASI, ie: crazy advanced AI
 
humans are too stupid to make something smarter than ourselves

qed

Dynam0 02-23-2017 11:09 AM

Re: The singularity, ASI, ie: crazy advanced AI
 
Quote:

Originally Posted by Reach (Post 4524894)
One reason I think people erroneously overestimate the potential of AI is how we've traditionally seen Moore's law: computational efficiency can only continue to double for so long. In the coming decade or two we're going to see efficiency plateau, as the maximal efficiency of computer parts is limited by the laws of physics. At some point, systems will have to become larger, not simply more efficient. This will place practical restrictions on ASI.

Haven't heard of quantum computing I see ;)

Reach 02-23-2017 11:53 AM

Re: The singularity, ASI, ie: crazy advanced AI
 
Quote:

Originally Posted by Dynam0 (Post 4524923)
Haven't heard of quantum computing I see ;)

Which can certainly make some forms of computation more effective, but you wouldn't want it for everything. Traditional computation isn't going anywhere. I'm sure we'll find other ways of computing in the future as well.

But we don't know much about the resource requirements for quantum computing at this point, since we can't even build a functional machine beyond maybe a few qubits. It still has to obey the laws of physics and will have architectural constraints.

An AI system will always be constrained by energy requirements and our ability to use energy will plateau out. In terms of microchips we're already there. Processors can't really go any faster. Quantum computing will squeeze more out of computation but it'll also only go so far and is constrained in its applications.

What it will come down to is how far above (or below!) human cognition our AI systems can reach with the energy available to them before we plateau.

I prefer to take the pessimistic stance and would argue that even if we eventually have the ability to produce something that would turn itself into an ASI, it'll never happen for practical reasons (economic or energy-related).

DaBackpack 02-23-2017 04:32 PM

Re: The singularity, ASI, ie: crazy advanced AI
 
I don't think it will happen.

Quote:

Originally Posted by Reach (Post 4524894)

The resource demands of such a super AI could quickly become absurd and unsustainable without the ability to extract more resources from the far reaches of space, which may or may not be viable. I'm imagining the scenario where an ASI continues to expand until it engulfs an entire planet: an interconnected network with hordes of worker AIs continually maintaining and constructing additions to the ASI. Eventually the ASI would have to reach an equilibrium where it simply can't expand anymore, based on the confines of the system it has built itself into. Exactly how intelligent that AI would be is anyone's guess, but it would still have fundamental limitations.

I'm a PhD student in AI, and the above is one of the more cited arguments -- we'd run out of electricity before such a thing could actually exist. (You could argue that the ASI would first derive a new, "infinite" energy source, but we have no realistic reason to believe that will happen beyond hypotheticals.)

The much more fundamental answer, though, is that we have absolutely zero examples of "good" general-purpose AIs in existence. We build them to solve very specific problems. Most AI systems are pretty stupid and rely on relatively simple models (compared to the complexity of the human world, that is) and operate on specific assumptions and axioms.

You can ask, "why don't we just add these different AIs together and get a super AI that's good at everything?" The answer is that it's not really feasible at the moment. AIs are computer programs; you can't procedurally concatenate all the computer programs in the world to get a super-program. Even if you could, the processing time required to run such a program would be prohibitively large -- perhaps longer than you could actually sustain the agent for. (Quantum computing might address this, but remember that we would have to rebuild all of our algorithms for that new architecture, since the traditional binary encoding of knowledge is incompatible with qubit-centered computing.)

(EDIT: Also, research on meta-reasoning (more or less the ability of agents to know what kind of reasoning to use for a given input) isn't there yet, either. If you give a robot a donut, it has to decide what to do with it. It can paint a picture of it, throw it, eat it, etc. Again, a painting robot will know what to do, because it only knows how to do one thing. A general-intelligence agent has to decide what to do in a given situation, given the X, Y, and Z skills it possesses.)


The program I'm building now takes ~3 days to read and synthesize a corpus of 1000 stories, and it's still pretty garbage at telling unique stories of its own. (And, of course, it can't do anything else.)

So, maybe ask this question in 50 years, when something about computing fundamentally changes. Just my 2 cents

EDIT 2: I actually skimmed the articles, and they did a pretty good job of describing the limitations I mentioned above. I don't think the solutions provided are that accurate, though.

Reach 02-26-2017 08:21 AM

Re: The singularity, ASI, ie: crazy advanced AI
 
Agreed with the above.

On the energy issue, we certainly know from physics that an infinite energy source can't exist. Our best shot is probably the construction of efficient fusion reactors, since at least we know fusion is possible... but short of collapsing a nebula, it's still going to require massive investment and resources to build a reactor that could contain such a reaction. The ability to sustain a reaction here on Earth is going to be limited, and I think we'd much sooner use it to fix our own energy problems than to power an AI.

On the other topic, how realistic is the idea of building an AI that can come up with novel ideas on its own?

Have there been any major breakthroughs in general-purpose AI? I've been following various AI scenes for a while now (chess, poker), and while everyone gets excited every time an AI beats a human at one of these tasks (humans were recently defeated in poker!), the methods they're using aren't much different than they were 15 years ago. The search and learning algorithms are much more efficient now and take advantage of massive improvements in hardware, but ultimately the computers are still just... computing. Most of these wins came from advancements in learning from simulation.

The CPUs that beat humans in poker still can't "bluff"; the AI just ran different lines in the same situation a million times and calculated that betting is more profitable than checking even though it has a weak hand, so it bets because it's programmed to take the most profitable line.
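A toy sketch of that "most profitable line" logic (entirely my own illustration with made-up payoffs and frequencies, not any real poker bot): estimate each action's expected value by simulating outcomes many times, then pick the higher-EV action:

```python
import random

# Toy simulation-based action selection: estimate the expected value
# (EV) of each action by sampling random outcomes, then take the most
# profitable line. Payoffs and frequencies are invented for illustration.
def estimate_ev(action, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if action == "bet":
            # win the pot when the opponent folds, lose the bet otherwise
            opponent_folds = rng.random() < 0.6  # assumed fold frequency
            total += 2.0 if opponent_folds else -1.0
        else:  # "check": the weak hand occasionally wins at showdown
            total += 2.0 if rng.random() < 0.2 else 0.0
    return total / trials

# The "bluff" is just arithmetic: betting has the higher estimated EV.
best_action = max(["bet", "check"], key=estimate_ev)
```

There's no deception going on anywhere in there, which is the point: the bot bets with a weak hand only because the sampled numbers say betting pays more.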

I see these as human achievements at this point, where we're doing all of the thinking and simply need these tools to crunch numbers we can't crunch ourselves. This is a far cry from a computer being able to produce a thought of its own. I guess from a theoretical perspective, would that not require the AI to be able to write its own code?

Reincarnate 02-26-2017 01:56 PM

Re: The singularity, ASI, ie: crazy advanced AI
 
When we currently code AI programs, we do so with some goal in mind.

For instance, we can make a rock-paper-scissors bot that uses some basic Markov crap to track how often it wins compared to your strategy over time, adjusting its moves based on conditional response. The more you play, the more it "learns" and the better it gets.

But in this case, it's not really "learning" in the way a human might -- it's just following the rules we've given it and using past data to better inform its decision-making to maximize its chances. But we're telling it exactly how to use that data and how to make decisions. This kind of AI doesn't have the framework required for doing anything else. And really, most AI programs (even the really effective ones) operate on the same simple principles.
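For a concrete example of "just following the rules we've given it", here's a minimal sketch of such a rock-paper-scissors bot (my own toy code, assuming the opponent's next move depends only on their previous one):

```python
import random
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# Toy Markov rock-paper-scissors bot: count which move the opponent
# tends to play after each of their moves, predict the most frequent
# follow-up, and play its counter. Pure bookkeeping, no "understanding".
class MarkovRPSBot:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None  # opponent's previous move

    def play(self):
        if self.last is None or not self.counts[self.last]:
            return random.choice(list(BEATS))  # no data yet: play randomly
        followups = self.counts[self.last]
        predicted = max(followups, key=followups.get)
        return BEATS[predicted]  # counter the predicted move

    def observe(self, opponent_move):
        if self.last is not None:
            self.counts[self.last][opponent_move] += 1
        self.last = opponent_move
```

Everything the bot "learns" is a table of counts we told it to keep, and everything it "decides" is an argmax we told it to take.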

It's worth noting that none of these programs are doing anything that humans can't already do themselves, because we're the ones defining the rules! We could follow the same rules if we just had the same memory ability and speed (in practice, we'd just need lots of paper and time :P ).

We don't have any good examples of AI that "learns new things" and "expands upon itself" because the instructions for this would be immensely complicated. And we're biased in that when we think of a general purpose, hyper-intelligent AI, we think of something that's like a really, really, really smart human. But there are issues with this.

If we think of our own brains as a sort of program, it'd be a tough one to replicate: It's been shaped by millions and millions of years of evolution. When we see a donut in our hands (to use the example from an earlier post), what we do with it depends on a massive jumble of variables and processes. Are we hungry? Are our current priorities such that we care about things like diet? Do we have past experience with this donut -- do we already know how it'll taste and how we'll feel about it? What about thinking ahead -- would we prefer to save our appetite for something else? Maybe we don't have a napkin, and we care sufficiently in this context to not make a mess. Maybe we feel bored and there's nothing better to do. Maybe we'd rather throw it instead.

And so on and so on. And all these decisions are further influenced by the decisions we've made up to this point and the state of the world around us. Things like "want" and "need" in any given situation have been baked in through evolution, too. We are like programs whose objective/fitness functions have been determined by natural selection.
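To caricature that in code (all weights and state variables invented for illustration): an agent's "choice" here is just an argmax over a hand-coded utility function, the kind of scoring that evolution would otherwise have tuned for us:

```python
# Caricature of utility-based decision-making: the donut "decision" is
# just an argmax over hand-coded scores. Every weight is made up.
def choose_donut_action(state):
    utilities = {
        "eat":   2.0 * state["hunger"] - 1.5 * state["dieting"],
        "save":  1.0 * state["planning_ahead"],
        "throw": 0.5 * state["boredom"],
    }
    return max(utilities, key=utilities.get)

# A hungry agent with no dietary concerns just eats it.
action = choose_donut_action(
    {"hunger": 0.9, "dieting": 0.0, "planning_ahead": 0.3, "boredom": 0.2}
)
```

The hard part isn't the argmax; it's that in a real brain the "weights" and the list of candidate actions are themselves products of evolution and experience, not three numbers someone typed in.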

It's going to be hard to stuff all that into a computer program. I don't think we're going to see any kind of "general purpose AI" until we have the computing power to replicate a human brain, let alone understand and modify it. Even that is a huge challenge: I think our best efforts so far have replicated a sand-grain-sized chunk of a rat brain, which is a far cry from an entire human brain. And even if we do replicate it, it's not clear we'd understand it well enough to modify it, since it's just so complex.

On the other hand, we could say "screw it" and just try to remove the complexity by using heuristics and assumptions to mimic xx% of the functionality, focus on the decision-making processes we care about, and then try to put it all together. But that's still speculative, and I don't know if we can remove the complexity and still have it "learn" effectively, because the human learning process is not some fixed thing: people draw inspiration and new inputs from a variety of sources. I think if we reduce the complexity too much, we also reduce the strength of the process and its output.

And as mentioned earlier, can we even run the thing? Once we get to the point where we are basically mapping matter to a large computer, the energy requirements would be so large that we may as well scrap it and focus on genetic engineering to eliminate the middleman: we already have the hardware (i.e., ourselves) for producing new brains with much lower energy requirements and no need for external mapping. But then we start getting into big ethical concerns.

When it comes to computer-based intelligence, I don't think we're going to make it too far past simple, small-scale, fixed-goal programs. I am hugely pessimistic about a "general purpose AI," but I'm a lot more confident in our ability to genetically modify things to accomplish similar goals.

Reach 02-26-2017 09:28 PM

Re: The singularity, ASI, ie: crazy advanced AI
 
You bring up an interesting point and dilemma. While strong general purpose AI might elude us for the entirety of the 21st century, we already have good human models for intelligence and genetics research is exploding.

If there is one thing I can see happening in the not-so-distant future, it's human gene modification. While the ethics debates on that subject will surely never end, once it's possible there will probably be people doing it. Making something illegal has never in history prevented it from happening... so someone, somewhere will start doing it.

...and if you created a human superintelligence, couldn't it have all of the benefits of an ASI, in that it could solve problems that have otherwise eluded mankind, without any of the dangers or resource concerns of an ASI?

