Old 02-26-2017, 01:56 PM   #9
Reincarnate
x'); DROP TABLE FFR;--
Retired Staff / FFR Veteran
 
 
Join Date: Nov 2010
Posts: 6,332
Re: The singularity, ASI, ie: crazy advanced AI

When we write AI programs today, we do so with some specific goal in mind.

For instance, we can make a rock-paper-scissors bot that uses some basic Markov crap to track how your moves tend to depend on what came before, adjusting its own moves over time to counter those patterns. The more you play, the more it "learns" and the better it gets.
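To make that concrete, here's a minimal sketch (in Python; the class name and every detail are made up for illustration, not anyone's actual bot) of the kind of thing I mean: it keeps a simple first-order Markov count of which move you tend to throw after your previous one, predicts your next move, and plays the counter.

[code]
import random
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class MarkovRPSBot:
    def __init__(self):
        # transition_counts[prev][nxt] = how often you followed move prev with move nxt
        self.transition_counts = defaultdict(lambda: defaultdict(int))
        self.last_opponent_move = None

    def choose(self):
        history = self.transition_counts.get(self.last_opponent_move)
        if not history:
            return random.choice(list(BEATS))      # no data yet: play randomly
        predicted = max(history, key=history.get)  # your most likely next move
        return BEATS[predicted]                    # throw whatever beats it

    def observe(self, opponent_move):
        # Update the model with what you actually played this round.
        if self.last_opponent_move is not None:
            self.transition_counts[self.last_opponent_move][opponent_move] += 1
        self.last_opponent_move = opponent_move

bot = MarkovRPSBot()
print(bot.choose())   # the bot's throw for this round
bot.observe("rock")   # then tell it what the human actually threw
[/code]

Every "smart" thing it does traces back to two hand-written rules: count your transitions, counter the most likely next move.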

But in this case, it's not really "learning" in the way a human might -- it's just following the rules we've given it and using past data to inform its decision-making and maximize its chances. We're telling it exactly how to use that data and how to make decisions. This kind of AI doesn't have the framework required for doing anything else. And really, most AI programs (even the really effective ones) operate on the same simple principles.

It's worth noting that none of these programs are doing anything that humans can't already do themselves, because we're the ones defining the rules! We could follow the same rules ourselves if we just had the same memory capacity and speed (in practice, we'd just need lots of paper and time :P ).

We don't have any good examples of AI that "learns new things" and "expands upon itself" because the instructions for this would be immensely complicated. And we're biased in that when we think of a general purpose, hyper-intelligent AI, we think of something that's like a really, really, really smart human. But there are issues with this.

If we think of our own brains as a sort of program, it'd be a tough one to replicate: It's been shaped by millions and millions of years of evolution. When we see a donut in our hands (to use the example from an earlier post), what we do with it depends on a massive jumble of variables and processes. Are we hungry? Are our current priorities such that we care about things like diet? Do we have past experience with this donut -- do we already know how it'll taste and how we'll feel about it? What about thinking ahead -- would we prefer to save our appetite for something else? Maybe we don't have a napkin, and we care sufficiently in this context to not make a mess. Maybe we feel bored and there's nothing better to do. Maybe we'd rather throw it instead.

And so on and so on. And all these decisions are further influenced by the decisions we've made up to this point and the state of the world around us. Things like "want" and "need" in any given situation have been baked in through evolution, too. We are like programs whose objective/fitness functions have been determined by natural selection.
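To give a feel for what an objective/fitness function looks like when you actually write one down, here's a toy evolutionary loop (pure illustration -- the function names, the "maximize the number of 1s" objective, and all the parameters are invented for this sketch): the only thing we hand the process is the fitness function, and selection plus mutation does the rest.

[code]
import random

def fitness(genome):
    return sum(genome)  # the hand-picked objective -- evolution only ever "sees" this number

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, genome_len=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)    # rank by fitness
        survivors = population[:pop_size // 2]        # keep the fitter half
        children = [mutate(random.choice(survivors))  # refill with mutated copies
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()))  # usually 20 or close to it: the all-1s genomes take over
[/code]

Swap in a different fitness function and the same machinery produces completely different "behavior" -- which is roughly the sense in which natural selection has baked our wants and needs into us.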

It's going to be hard to stuff all that into a computer program. I don't think we're going to see any kind of "general purpose AI" until we have the computing power to replicate a human brain, let alone understand and modify it. Even that is a huge challenge. I think right now our best efforts have replicated a sand-grain-sized chunk of a rat brain, and this is a far cry from being able to replicate an entire human brain. And even if we do replicate one, it's not clear whether we'd be able to understand it well enough to modify it, since it's just so complex.

On the other hand, we could say "screw it" and just try to remove the complexity: use heuristics and assumptions to mimic xx% of the functionality, focus on the decision-making processes we care about, and then try to put it all together. But that's still speculative, and I don't know if we can strip out the complexity and still have it "learn" effectively. The human learning process is not some fixed thing: people draw inspiration and new inputs from a variety of sources. I think if we reduce the complexity too much, we also reduce the strength of the process and its output.

And as mentioned earlier, can we even run the thing? Once we get to the point where we're basically mapping matter onto a large computer, the energy requirements would be so large that we might as well scrap the whole idea and focus on genetic engineering instead, cutting out the middleman: we already have the hardware (i.e. ourselves) for producing new brains at much lower energy cost and without the need for external mapping. But then we start getting into big ethical concerns.

When it comes to computer-based intelligence, I don't think we're going to make it too far past simple, small-scale, fixed-goal programs. I am hugely pessimistic about a "general purpose AI," but I'm a lot more confident in our ability to genetically modify things to accomplish similar goals.

Last edited by Reincarnate; 02-26-2017 at 02:16 PM..