Everything posted by Kikker
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
Backpropagation introduces the concept of self-correction instead of environmental correction. It is contradictory to genetic algorithms: something can't be a genetic algorithm and a neural network at the same time, unless you have a population of self-correcting routines encapsulated in a genetic algorithm.

You misunderstand. The fastest subroutine to finish doesn't equal the subroutine that actually does something. You only explain how an already working subroutine gets faster, not how you get a working subroutine in the first place or how it rules out unnecessary actions.

If your estimate is so rough that it can differ by dozens of orders of magnitude, you should just argue that the actual number is unknown. I assumed you were comparing neurons to transistors from here: Or do I need to explain that each layer of transistors could halve the switches per second of the whole thing?

You're a good sport, you know: first slander my arguments and then demand I don't refute you.
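A minimal sketch in Python of what self-correction means here (the toy function and numbers are my own illustration, not anything quoted from this thread): the unit measures its own prediction error and adjusts its own weights from that error signal, rather than an outside process selecting among competing subroutines.

    def backprop_step(w, b, x, y, lr=0.1):
        """One gradient step for a single linear unit y_hat = w*x + b."""
        y_hat = w * x + b
        error = y_hat - y          # the unit's own error signal
        w -= lr * error * x        # weights corrected from that signal;
        b -= lr * error            # no outside process selects anything
        return w, b

    w, b = 0.0, 0.0
    for _ in range(100):
        w, b = backprop_step(w, b, x=2.0, y=4.0)
    print(w, b)  # converges toward satisfying w*2 + b == 4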
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
I don't understand how you see this process as a genetic algorithm... There is no combination of genetic traits between subroutines to make new ones, no killing of unfit subroutines, and you don't even try to explain how random mutations come into being or how the fastest subroutine is also evaluated as the best one to do a particular thing. Before trying to fit a square peg into a round hole, please just read up on neural networks and backpropagation, which more closely resembles your own description of how the brain works.

I don't understand where you get your 400 billion from. Even if it's possible, the observations I googled (I don't have the time to spend a workweek understanding precisely what scientists agree on) all come down to 20-1000 switches per second on average across the brain (link your article if you don't agree). That's not even near 400 billion. More importantly, a transistor can't hold even remotely the amount of information needed to represent a neuron. You would need at least a transistor for the neuron itself and one for every connection it has to other neurons (an average of 1000), which is still wildly inaccurate. Your estimate needs some serious adjustment.
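For reference, a minimal sketch of the ingredients a genetic algorithm is normally taken to have (the toy "match a target bit string" task and every name in it are my own illustration): a population, fitness-based selection that kills off unfit individuals, crossover between parents, and random mutation.

    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

    def fitness(genome):
        # how many positions match the target
        return sum(g == t for g, t in zip(genome, TARGET))

    def crossover(a, b):
        # combine genetic traits of two parents at a random cut point
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.05):
        # random mutations flip the occasional bit
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                 # the unfit half is killed off
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(10)]
        population = survivors + children

    best = max(population, key=fitness)
    print(best, fitness(best))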
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
I'll put the other arguments on hold for a post or two. Of all the things you don't explain, you choose this core idea you hold... What is this base level you're talking about? How are they almost identical? What exactly is the theory of how the brain works?
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
I admit the phrasing was sloppy. But if you keep insisting I meant X = X and X = X+1, then you're assuming I'm an idiot beforehand. Just to be clear, what I meant was: you could have a goalstate(x), but you could also have a goalstate(y) which has a subgoal(x). Since you generally don't know subgoal(x), it's potentially dangerous.

I agree that the brain structure is essentially* encoded in DNA. But if you learn about physics, your DNA doesn't change after every bit of knowledge you gain; your structure has enabled you to learn it. So AI researchers don't have to recreate the process that produced a human structure after 4 billion years; you could just reverse-engineer the governing principles of the current brain structure in order to get a human-level AI.

* It's mathematically impossible to encode every neuron in DNA, so it's probably a certain growing pattern which has different outcomes if environmental factors change, most importantly during pregnancy.

I didn't say that. To adjust your analogies: "A rock being heavier than a bullet doesn't imply bullet-level dangerousness." "A car having more mass than the H-bomb doesn't imply it having H-bomb-level deadliness." The very fact that you had to change the sentence structure completely to disprove an analogy should ring a bell in your head.

I'm not sure what your syllogism clarifies, as it can be used to argue that any property of a human is the same for an AI. I assume you don't want to argue that.

Humans want to take over the world.
Humans have legs.
AI wants to take over the world.
Therefore it is implied AI has legs.

Human legs are strictly related to human biology. Human biology implies desires among other things. Therefore it is implied that an AI that wants to take over the world has legs.

To be specific, the condition that the desire to take over the world requires human intelligence isn't believable. You can make it observe a (simulated) situation which resembles a world taken over and turn it into a goal state. And like I said before, a world taken over could be a hidden sub-goal of a goal state which we had programmed in. And no, I do not know any action of mine that doesn't have an evolutionary foundation behind it. I'm not sure what an evolutionary biological motive is if it's different from an evolutionary foundation.
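A minimal sketch of the goalstate(y)/subgoal(x) point at the top of this post (the toy world and planner are entirely my own invention): the planner is only given goalstate(y), but the plan it finds passes through a state x that nobody specified or asked to check for.

    # toy action model: state -> {action: resulting state}
    actions = {
        "start":             {"seize power grid": "controls the grid"},    # the hidden subgoal(x)
        "controls the grid": {"run all factories": "output maximised"},    # goalstate(y)
    }

    def plan(state, goal, path=()):
        """Depth-first search for any action sequence reaching the goal state."""
        if state == goal:
            return list(path)
        for action, next_state in actions.get(state, {}).items():
            result = plan(next_state, goal, path + (action,))
            if result is not None:
                return result
        return None

    print(plan("start", "output maximised"))
    # -> ['seize power grid', 'run all factories']: the subgoal only shows up inside the plan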
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
The latest breakthroughs (convolutional neural networks, AlphaGo, the Netflix algorithm) don't make use of a genetic algorithm. There have even been serious doubts about the efficiency of genetic algorithms in their original form. I don't know what you think they figured out in those papers, but genetic algorithms are generally used to generate connection structures while backpropagation is used to optimize the parameters. To draw an analogy to humans: most of our intelligence isn't encoded in our DNA; rather, we learn it throughout our lives through a process we haven't figured out. You don't have to recreate the genetic process in order to recreate that learning ability.

I don't know what semantic game you're playing, but anything can be a goal state and anything can be a sub-goal state. If that wasn't clear from me using world domination as both a goal state and a sub-goal state, I apologize. World domination could be the easiest path to accomplish the goal "continue humanity's existence", for example. Your a-and-b distinction is a bit weird since the difference isn't very clear. The structure in which a desire can develop is most likely programmed; otherwise the structure is written by a program which is in turn programmed (etc.). I don't understand how both things are mutually exclusive. Here is an explanation of how a cooperative AI might be developed.

When talking about human-level intelligence I assumed it meant the same learning ability and the same skill ability. That doesn't imply the same moral system or the same desires. To put it differently: a bear's ability to defeat a human in close combat doesn't imply human-level intellect regarding close combat. An AI's ability to take over the world doesn't imply human-level intellect regarding anything except its ability to take over the world.

Wrong post?
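A rough sketch of that division of labour (my own illustration; the scoring function is only a stand-in for "train this architecture with backprop and return its validation score", faked here so the sketch runs): an outer genetic search proposes connection structures, while an inner training step is assumed to fit the parameters of each one.

    import random

    def train_and_score(hidden_sizes):
        """Stand-in for: build the network, train its weights with backprop,
        return validation performance. Faked with a toy rule for illustration."""
        return -abs(sum(hidden_sizes) - 100) - 5 * len(hidden_sizes)

    def mutate(arch):
        # the genetic search only edits the *structure* (layer sizes)
        arch = arch[:]
        i = random.randrange(len(arch))
        arch[i] = max(1, arch[i] + random.choice([-8, 8]))
        return arch

    population = [[random.randint(10, 80) for _ in range(2)] for _ in range(6)]
    for _ in range(30):
        population.sort(key=train_and_score, reverse=True)
        parents = population[:3]                      # keep the fitter structures
        population = parents + [mutate(random.choice(parents)) for _ in range(3)]

    print(max(population, key=train_and_score))   # best-found layer sizes; backprop fits their weights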
-
Artificial vs Natural Intelligence
Kikker replied to ProhibitionKilz's topic in Science & Technology
(paraphrasing an example from The Master Algorithm) Imagine you're trying to cure cancer. You would need a way to interrupt the process of cancer cells while leaving the other cells unharmed. In order to do that you need an accurate model of the cell's workings, precise enough to identify weak points unique to cancer cells. Also, cancer cells are caused by many different mutations, meaning you'll need to be able to individually identify the cancer within each patient. In theory you could accomplish this by creating a Markov model (let's just say it's a spinoff of Bayesian models) which explores each possible state and calculates the probabilities of different state transitions. You of course need to manually note down each state, or specify observations in a way that states can be inferred. You'll also need an algorithm to manipulate the states and add possible logical statements (hypotheses); the Markov model will then check whether they're likely or not. You then have an algorithm which can model the workings of that cell. In the real world, though, we can't even see all the states and workings of a cell to begin with, making the state plane incomplete. A Markov model can't handle that, and you'll need a way to reduce the noise (or a better observer).

Something doesn't need the desire to take over the world in order to actually take over the world. A goal state could have a requirement to take over the world. Besides, most goal states (such as taking over the world) are too complex to formulate manually. You'll need to let the AI observe a goal state, infer the conditions behind it and then let it do its thing. Furthermore, the main point is not that Skynet happens but rather that once an AI exists which can outsmart most of mankind, we wouldn't be able to stop it once it set out to do whatever we wanted it to do. A faulty goal state could then cause serious damage.

Which papers?
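A minimal sketch of the kind of Markov model described above (the states and numbers are invented for illustration, not taken from The Master Algorithm): named cell states with explicit transition probabilities, which lets you ask how likely a given chain of state changes is.

    # toy transition probabilities between hand-specified cell states
    transitions = {
        "healthy":   {"healthy": 0.97, "mutated": 0.03},
        "mutated":   {"healthy": 0.10, "mutated": 0.70, "cancerous": 0.20},
        "cancerous": {"cancerous": 1.00},
    }

    def path_probability(path):
        """Probability of observing a particular sequence of state transitions."""
        p = 1.0
        for current, nxt in zip(path, path[1:]):
            p *= transitions[current].get(nxt, 0.0)
        return p

    print(path_probability(["healthy", "mutated", "cancerous"]))  # 0.03 * 0.20 = 0.006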
-
This article is probably fake since the event itself is untraceable.
-
It's always a bit perplexing when a person believes that determinism would switch around cause and effect, believing that the effect would become independent of the events preceding it.
-
Why is human life worth more than animal life?
Kikker replied to richardbaxter's topic in Philosophy
Another way: should we kill (or let be killed) a human being with an IQ of 100 to save a human being with an IQ of 101, or 500 puppies? -
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
What do you mean by no functional impact on the system? If it has no impact at all it wouldn't be observable, so how do you define functional? -
I found a video which illustrates my point of view better. Jordan Peterson - IQ and The Job Market
-
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
Can you please explain your basis for this? The scientific method requires empirical measurement, and such measurement has no access to mental properties (it therefore cannot use them per se). Measurement only has access to physical properties (e.g. the state of a particle, neuron, etc.). Under physicalism such physical properties are assumed to correspond to mental properties - however, one would struggle to find a neuroscientist (speculating about philosophy) who adopts reductive physicalism (a 1-to-1 correspondence between mental and physical properties), based on how information is distributed across neural networks. Most physicalists uphold non-reductive physicalism, specifically the thesis of supervenience (that there cannot be a change in a substance's mental properties without a corresponding change in its physical properties), or attempt some form of eliminativism. Under the peculiar form of the Copenhagen Interpretation discussed, measurement requires mental properties but it does not use them per se (it still has no access to them).

You quote just half a sentence, which was an or-statement. I'm not sure what I need to explain since you don't address the second part of the sentence. It seems that in these two posts an argument of mine got refuted by you without me knowing, or you misunderstood it; it was about a house being unmeasurable.

I was under the assumption that mental properties correspond to brain functionality or (if you refuse that connection) at least mental ability, meaning everything is measured in mental properties. To take your example of counting the number of specific objects moving across a specific region in space-time: counting is a mental property. The specific object has a definition to differentiate between the specific object and any other thing; object recognition is a mental property. Movement is also only recognized through mental properties. All empirical measurements depend on mental properties to measure in. Even an action like writing your observations down on paper needs mental properties to first store those observations when observing and then mental properties to retrieve that information to write it down. Mental properties aren't limited to subjective observations; objective observations are also dependent on mental properties. If mental properties aren't brain functionalities, what are they? And how do you differentiate between a mental property and brain functionality?

You seem to misunderstand hypotheses: they are assumptions with reasonable probability (because of previous observation) which need to be tested in an experiment. So if your hypothesis is that self-reported experiences about the color someone is seeing correspond with distinct neurological patterns, you can test that. Maybe take multiple test subjects, let them self-report the colors they're seeing and measure the neurological effect, then estimate whether any correspondence you see is improbable enough under a no-correspondence (null case) assumption to make your hypothesis more probable.
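A minimal sketch of the kind of test described above (the data and the decoded pattern labels are invented): shuffle the reported colors relative to the measured patterns to build the no-correspondence (null) distribution, then check how often chance alone matches the patterns as well as the real pairing does.

    import random

    reports  = ["red", "red", "blue", "blue", "red", "blue", "red", "blue"]
    patterns = ["A",   "A",   "B",    "B",    "A",   "B",    "A",   "B"]   # decoded neural labels

    def agreement(reports, patterns):
        pairing = {"red": "A", "blue": "B"}      # hypothesised correspondence
        return sum(pairing[r] == p for r, p in zip(reports, patterns))

    observed = agreement(reports, patterns)
    null_hits = 0
    for _ in range(10_000):
        shuffled = random.sample(patterns, len(patterns))   # no-correspondence null
        if agreement(reports, shuffled) >= observed:
            null_hits += 1
    print(null_hits / 10_000)   # small value: the correspondence is unlikely by chance
-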
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
Yes, I read through the study, and the role of the parietal cortex doesn't seem to be very well understood except for the literal self-image we have, since we already know that both seeing and imagining something requires the parietal cortex. I worked under the assumption of mental properties to show how his proposed view contradicts itself, without the need for counter-evidence from the real world on how evolution works or how we can to some extent measure brain functions, even simulate them (the Blue Brain Project can simulate parts of a rat brain). I find there is little reason to believe that it is impossible to understand our brain's inner workings. Our inability to simulate a human brain, for example, is more of a computational challenge than an inability to unravel the workings of our neurons.

The point wasn't that you can't know whether any of the mental properties contain information about the real world. It was that you never know what is and what isn't from the real world except by probability estimates. -
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
Can you point to a study on that part? Because the parietal lobe is mainly about spatial awareness, object manipulation, object recognition and language manipulation. The self doesn't really fit into those categories as far as I'm aware.

If a non-sentient machine is able to use the scientific method, then either it uses mental properties which are categorically immeasurable, so we don't know whether its conclusions are scientific or not, or it uses a measurable method which has the same results as mental properties, which heavily suggests that mental properties aren't immeasurable.

The point I'm trying to make is that if mental properties are immeasurable and all things are understood through mental properties, we would always need to assume that mental properties definitely contain information about the real world without ever knowing for sure. This leads to a situation where every observation someone makes can be both empirical as well as not empirical, and can't ever be certain. Empirical evidence is thus redefined as a probability (instead of an absolute truth) with the assumption that the world we observe is real. If mental properties can be observed, they are accessible to the empirical method, since the empirical method only needs to estimate their workings based on observations. If mental properties are categorically inaccessible to the empirical method, even though the empirical method is based on estimates, then any observation we make has an equal chance of being true as well as false, meaning it's impossible to measure, for example, a house. -
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
If all mental properties are internal experiences, I'm confused about how empirical observations could possibly exist. To elaborate, our internal experience includes far more than sight or smell. Our sight, for example, is not only the light that enters our eyes; it includes perceptual knowledge of how to estimate distance and a memory capable of storing all relevant objects (lamp, table, tree) and recalling them in real time. A chair doesn't actually exist in the physical world; we sense something and categorize that thing as a chair if it has certain attributes. How can you possibly measure the height of a house when the difference between house and air is a mental property, the meters/feet you measure it in are a mental property, and your sense of the instrument you're measuring with is a mental property? -
The problem of evolutionarily irrelevant strong emergence
Kikker replied to richardbaxter's topic in Philosophy
At this point I'm quite curious: what exactly are mental properties, and why can't they, by definition, be observed? -
It assumes that automation will happen too quickly for the workforce to adapt, resulting in extremely high unemployment rates which the current system wouldn't be able to handle. Why would government subsidize businesses that are automating? Or do you actually mean government subsidizing businesses to prevent them from automating? In which areas would those new interests occur?
-
Don't you think you're beating a dead horse at this point? Even extreme leftist organizations like The Young Turks don't think "religion is the cause of all wars", or even its milder form, "religion causes significantly more war than atheism". And shouldn't you first attack the notion that all religion can be generalized under one banner against atheism or non-believers? There are vast differences between religions and the effects they have on behavior and/or policy.
-
It had already been posted 2.5 years ago, but I will link it again since not everyone seems to realize how far automation actually goes. Automation goes further than replacing low-skilled jobs; it will greatly decrease the demand for highly skilled jobs by, for example, making one person with AI assistance capable of doing the work of three persons without AI assistance. Jobs like designing cars will change drastically as evolutionary algorithms take over design functions. They can already design frames to be as light and strong as possible, to a point no human could have designed them. You're not designing anymore, just defining parameters and picking the design you like.

One fear I personally have is that automated systems become so valuable that the means to make those systems are enough to keep people in power, aka Russian economics or even Congo-like situations, in which it isn't in the best interest of the government to increase the productivity of its citizens and it instead expends its resources on controlling key resources or automated systems.
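A minimal sketch of that workflow (the "frame" model and all numbers are invented): the human only defines the parameters and the trade-off, an evolutionary search does the designing, and the human picks a design from the results.

    import random

    def frame_score(thickness, rib_count):
        # toy trade-off: stronger is better, heavier is worse
        strength = 4 * thickness + 2 * rib_count
        weight = thickness ** 2 + 0.5 * rib_count ** 2
        return strength - weight

    # random initial designs: (thickness, rib_count)
    designs = [(random.uniform(1, 10), random.randint(0, 20)) for _ in range(30)]
    for _ in range(100):
        designs.sort(key=lambda d: frame_score(*d), reverse=True)
        parents = designs[:10]                      # keep the better frames
        designs = parents + [(t + random.uniform(-0.5, 0.5),
                              max(0, r + random.choice([-1, 0, 1])))
                             for t, r in random.choices(parents, k=20)]

    print(sorted(designs, key=lambda d: frame_score(*d), reverse=True)[:3])  # pick the one you like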
-
Even now you're arguing that something can't be "aware" because it is an extension of human choice. Doublethink right there.
-
I am still curious about a few things. Isn't it enough for the national debt to stay the same percentage of GDP? Why doesn't inflation skyrocket when you're increasing debt relative to GDP? Why would reducing national debt not simply result in deflation of the currency? Or is deflation exactly the problem that would result in less/negative growth? Am I missing something?
-
.... Your argumentation style is incorrect. Even now you're arguing that something can't be "aware" because it is an extension of human choice. Do you really need an explanation? Having children is an extension of human choice, and even if god made humans, then everything is made by god, including humans. You can't have humans be aware and not aware at the same time. Also, if you argue that it only applies to objects, why not say that all objects can't be aware? The fact that a human made it is irrelevant.

And your previous argument was the same: under determinism a closed system (a system without any outside interference) would be entirely predictable if all elements are accounted for. You thought of a closed system, predicted it and then interfered with it, which means it isn't a closed system. You can't have a closed and a not-closed system at the same time. So I said: either extend the closed system, or really make the molecules a closed system by excluding your own interference.

What? This is your experiment:
-
Determinism doesn't make it impossible for beings to observe causal relations and act accordingly. If determinism had such an attribute, then a frog intercepting a fly would already disprove it, without the need to include free will or choice. In your "thought experiment" it is irrelevant whether you predicted X or not. You cannot influence X if it is isolated from you; if it isn't isolated from you, then to accurately predict the future you also need to predict yourself.

It isn't the only explanation. You tried to predict a closed system with two important elements: you and those molecules. But you only predicted the molecules. You're leaving out an element, thus your prediction isn't necessarily true (or false) in a deterministic world. In your argumentation style, a missile-intercepting system has become "aware of the determined reality itself" since it can predict the course of a missile and prevent that course from happening; but it could also fail, leading to an "infinite loop" which switches between an intercepted missile and the original course for future point X.
-
You should read your sources once in a while....
-
[Central] - Climate Change / Global Warming topic
Kikker replied to Torero's topic in Science & Technology
Thank you, no excuses from me.
-