Kikker (Member)
Posts: 131 · Reputation: -7 · Rank: Newbie (1/14) · Profile views: 617

  1. But you interchange the two terms. For example, you responded to MysterionMuffles with: "How does being objectively closer to the truth and wiser not make us superior to the 'common populace'?" So I said: "MysterionMuffles talks about intellectual capabilities, not wisdom." That you didn't see my response as an objection to your assertion is strange; apparently you don't even feel the need to elaborate on the relationship between intellectual capability and wisdom. It's easier to respect people who do things you could do but don't want to, while still finding their work useful. However, what I was talking about were people in the same profession: how people who do the exact same job at a slower pace and lower quality can have surprising insights you would never think of otherwise. But maybe that's unique to the problem-solving/programming I'm involved in. So the overarching theme seems to be that most tribes don't update their stances enough, while your opinions do change. But what overarching framework do you have to balance that against the "I believe what the last reasonable person told me" phenomenon? And is the term "useless intellectual" you use to describe yourself heartfelt? In other words, are you actually stuck being one, or does that statement not express your full feelings about your profession?
  2. No, like I said, entropy can be reduced by both untruthful and truthful information. If the receiver already believes the information, a decrease in uncertainty takes place, and the reverse is also true. Besides, information isn't destroyed when uncertainty rises; you should know that.
  3. You said it was; I said no, it's not (destruction of information), because information can be truthful or not (correct or incorrect), so it can be protected by free speech. Then, in your latest response, you say no to something, but it's actually unclear what you're saying no to, since my statement directly counters your argument about fake news. Likewise, the nuance you brought in didn't do anything to change that.
  4. The definition I used was more akin to the one used in information theory, where the content of the message doesn't matter. Besides, reduction of uncertainty (or entropy, which has a formal mathematical definition) works both ways, for truthful and untruthful information alike (a small worked example follows after this list). Since your whole argument was based on the premise that information is only used to make "correct" choices, I think I'll wait for your reformulation.
  5. Well, this is not a chicken-or-egg problem; there is no argument about which came first, conscious thought or conscious speech. It's painfully obvious that you would need to be capable of certain thoughts first before being able to express them. Nevertheless, the argument is that speech and thought are part of the same cycle: your thoughts are heavily fueled by the speech (or, put more broadly, the communication) produced by others. Jordan Peterson seems to think that speech entails information flow, meaning that everything you have heard, seen or read that was produced by another human is speech, and to restrict that is to restrict the information flow itself. I probably don't have to tell you that manipulation of information, like history, can have dramatic effects on the perspective (and thus the thoughts) people can have. Even more so, in the current day and age a person could be considered dysfunctional when isolated from birth, maybe even less capable than a socialized chimpanzee. But Jordan argues something more fundamental: it would be beneficial (even necessary) for the individual to share honest (not necessarily truthful) beliefs freely. He makes this personal by giving the example of people streamlining their thought process by talking to other people about those thoughts. The larger picture, however, would be from the perspective of a multi-agent system in which nodes (humans) share their current beliefs, leading to consensuses (see the toy sketch after this list). If that information sharing is disrupted, for example because people share information that they don't believe because the law requires it, then it's not the total knowledge of the system that will be affected, but the ability of individuals to derive conclusions from the things they do know.
  6. MysterionMuffles talks about intellectual capabilities, not wisdom. At the same time, high intellectual capability (and believing you have it) could blind you to good ideas other people might have. Smart people might think that any ideas they come up with are better than any ideas their less smart peers have, simply because they are more intellectually capable overall; in other words, an overestimation of their superiority. A tribe can perfectly well consist of smart people only...
  7. Who is "we"? And what is the relationship between "we" and "someone"?
  8. It's cheaper, just not as cheap as it sounds. I tried to answer this question some time ago, but then went to watch those movies to see what you're on about; the thing is, I didn't return to this forum afterwards. So the first thing to keep in mind is that neural nets are heavily inspired by the brain, so they're already designed to emulate a "known" inertia system, the one we operate on. Several complexities of the human brain aren't simulated, though, and neural nets don't have nearly as many neurons as a human (even the big ones like in the video). Given that, those mini-abstractions I talked about are simply a way to describe what happens in a neural net. With images you can, to a certain extent, trace back what every layer receives and produces and get a nice little picture of it (a small sketch of how to capture that follows after this list). But what actually happens isn't restricted by the way people try to conceptualize the inner workings of a net. Simply put, the abstractions I mentioned are meant for you (and me) to get some idea; an actual neural network can manipulate data in all kinds of ways that don't conform to the way I described. Also, when I talked about the accuracy of those abstractions, I actually just meant their usefulness in doing what the neural net is supposed to do. I assume that some kind of abstraction takes place and that its usefulness determines the outcome. Edges to features to faces.
  9. I don't know whether you got stuck on treating "speech is necessary for thought" (or: speech is a way of thinking) as the same claim as "speech is the same as thought", or whether you fundamentally disagree with the relationship between thought and communication that Jordan describes.
  10. It's a lot of time spent keeping up to date with progress in artificial intelligence if it's not your field. The only thing I'm wary of (and annoyed by) is people taking offense, for example the debacle that played out here. Well, that requires significant pre-processing: an algorithm that cuts the relevant pieces out of a larger image (how can you determine that something is relevant?) and then uses humans for the final training, which is probably based on consensus. So you would need a network trained to find the data to be labeled, which would nonetheless be a far simpler network. Ah, I didn't mention that. Consider a neural net of 4 layers, each layer with 4000 neurons. Then each neuron has at least 4000 connections, and the neurons in the middle layers (2 and 3) have 8000. So yes, a neural net can have neurons with thousands of connections (the arithmetic is spelled out in a sketch after this list). But more importantly, the neurons involved in the early stages of human vision have significantly fewer connections and a particular connection structure, and that structure was translated to the neurons in a neural net; that was the inspired part. I don't understand how my statement suggested something about unknown parameters; I simply mentioned that more pixels (better resolution) cost more computing power. Mmm... I consider abstractions at a much lower level than you refer to here. An abstraction from a sharp contrast in an image to an edge (of something) is for me already an abstraction; pile a bunch of those mini-abstractions together (edges to corners, for example) and you can get something useful (like a square).
  11. Since it seems nobody is going to answer this, I will attempt an alternative answer. Firstly, I'm not an expert. Secondly, I assume you have already seen a bunch of vaccine-specific information sources, so I would recommend this video to get a sense of how your immune system works. Don't skip the boring parts, or rewatch if it goes too fast. It is fundamentally important to get a sense of how your immune system works in the first place, in order to reliably judge information as probable or utter nonsense (which is the most common distinction a non-expert has to make). I say this because your assertion about chromosomes being altered by an injection of (presumably) cells, and your estimation of that article as probable, make me believe you don't understand something really fundamental about your body, before even taking vaccines into consideration. If you find me patronizing, then you shouldn't put "Please don't laugh" in your post, suggesting that your arguments are normally met with ridicule.
  12. That title is incredibly misleading; the creature discovered was in fact what we consider an ape. It still walked on four feet and has, presuming it is in fact our ancestor, multiple ape and humanoid branches in the more recent past. The most important thing is that our species (Homo sapiens) originated from Africa 7 million years later (200,000 years ago), which makes Africa still the "birthplace" of mankind, unchanged by these findings.
  13. Yes and no. What you're describing hasn't been the state of the art since 2011. You can with relative ease make a neural network to recognize a chair. There have been two main breakthroughs since we were able to do that. Firstly, a chair is a single true-or-false question, so it's still manageable to make a traditional (fully connected) neural network and use it to recognize just chairs. If you want to recognize other objects in the images, you either have to make more networks or expand the one currently recognizing chairs to also recognize other objects; the latter increases training time dramatically. Secondly, you would need a data set of manually labeled pictures with chairs in them, and not just a few hundred: you need tens of thousands of manually labeled pictures to get good results. These two problems are solved with the introduction of convolutional networks (inspired by the actual neurons humans use) and adversarial networks. Convolutional networks significantly reduce the computing cost and increase the effectiveness of the initial stage of the network. In this initial stage the image is compressed to a standard input and low-level features like edges are extracted. Another property is that the filters used by convolutional networks are local (they look at a specific part of the image); a rough sketch of a small convolutional classifier follows after this list. In an adversarial network we actually make sure that some abstraction takes place, and since it's just two networks figuring out those abstractions, we don't need to label the data anymore. Also note that, in the earlier, traditional neural network, we don't care what abstractions are made by the algorithm; we just need the correct output (and we don't know what abstractions are made, if any). Now, however, we can directly see the accuracy of those abstractions in the generated image. Without the convolutional layers of that network the generated image would also be very blurry (higher resolution would be computationally unfeasible), but with convolutional layers a crisp image can be generated. In the video above every object has some kind of abstraction and some kind of transformation to the other weather conditions. So though you don't strictly need this kind of abstraction when simply recognizing a chair, it's definitely needed in the case above. --- Looking back, it's a given for me that these kinds of networks need to make some kind of abstraction at some level in order to function. But maybe it isn't for you; I hope you can get some intuition from my explanation.
  14. You sound like an advertisement because you keep presenting its advantages without actually explaining how the algorithm works, just citing videos (of one hour!), and the overview paper isn't going to cut it. In other words, your confidence in the advantages of the algorithm isn't reflected in the knowledge you display. I mean, do you know why Bitcoin is limited to an average of about 4* transactions per second? How is that avoided in Hashgraph? Basic questions left unanswered! Why don't you give a short summary of its inner workings if you already did the research? Besides that, it looks interesting; it seemingly has its roots in computer science instead of cryptology like the blockchain. Though in the video it is very casually mentioned that the network can be disrupted when hackers hold 33.4% of the nodes (and controlled at 66.8% of the nodes), while the blockchain has a hard limit of 51% for both. That sounds significant to me, especially since you don't have a constant push for computational proofs like in Bitcoin, meaning that inflating your control should be easier. * A block is roughly 2400 transactions and the algorithm controlling Bitcoin keeps an average of 1 block every 10 minutes (2400 / (10 * 60) = 4 per second); the arithmetic is spelled out in a sketch after this list.
  15. Well, to explain the AI thing properly: you have two adversarial networks. An adversarial network is a setup with two competing neural networks, one of which tries to produce fake images that look like real images while the other tries to distinguish real images from fake ones. The result is a network that is capable of producing very realistic images. So, two adversarial networks, in this case one for winter and one for summer, which can produce realistic fake images of both domains. The problem is the translation from one of the domains to the other. In the paper they used an implementation with variational encoders. Variational encoders are used to provide an interface between fake-image generation and humans inputting variables. To go one more layer in depth: an adversarial network can produce fake images from random noise, so each value of the random noise corresponds to a certain feature in the image. The problem is that those variables aren't exactly meaningful for our understanding, and trying out every single one of them to map the features which are meaningful is impossible. So you use variational encoders to impose a Gaussian distribution assumption on each variable, the assumption being that features we find meaningful have a normal distribution in the feature space. The research group uses two decoders, two encoders and the assumption of a shared latent space to map a summer (or day) image to a winter (or night) image. So you have an encoder for summer and an encoder for winter, and you place a restriction (the shared-latent-space assumption) on them that they should encode to the same space. Then the two decoders can take the encoded image from either of the encoders (which is the actual breakthrough) and produce a winter or a summer image; a bare-bones sketch of that encoder/decoder layout follows after this list. This shared latent space could of course be extended to any kind of weather, though it would add significantly to the computing cost.
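
To make the entropy point in items 2 and 4 concrete, here is a minimal sketch, entirely my own illustration with made-up numbers: Shannon entropy over a receiver's two-hypothesis belief state. The formula only measures uncertainty; it drops whenever the receiver becomes more confident, whether or not the report that caused the shift was true.

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A receiver is undecided between two hypotheses: maximum uncertainty.
prior = [0.5, 0.5]

# After a report, the receiver shifts most of their belief to one
# hypothesis. The entropy of the belief state drops either way; the
# formula never inspects whether the report was truthful.
after_true_report = [0.9, 0.1]
after_false_report = [0.1, 0.9]

print(entropy(prior))               # 1.0 bit
print(entropy(after_true_report))   # ~0.47 bits
print(entropy(after_false_report))  # ~0.47 bits
```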
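
The multi-agent picture at the end of item 5 can be illustrated with a toy belief-averaging model. This is a sketch under my own assumptions (DeGroot-style averaging over everyone's reported belief, with a fixed fraction of agents coerced into reporting a mandated value instead of what they believe); the function names and parameters are placeholders, not anything from the thread.

```python
import random

def run_consensus(n_agents=50, rounds=30, coerced_fraction=0.0, mandated_value=1.0):
    """Toy model: each round, every agent averages its own belief with the
    mean of what the others *report*. Coerced agents report a mandated
    value instead of what they actually believe."""
    random.seed(0)  # same starting beliefs for both runs below
    beliefs = [random.random() for _ in range(n_agents)]
    n_coerced = int(coerced_fraction * n_agents)
    for _ in range(rounds):
        reported = [mandated_value if i < n_coerced else b
                    for i, b in enumerate(beliefs)]
        avg_report = sum(reported) / n_agents
        # Everyone (coerced agents included) still privately updates
        # toward the average of what was said, not what was believed.
        beliefs = [0.5 * b + 0.5 * avg_report for b in beliefs]
    return sum(beliefs) / n_agents

print(run_consensus(coerced_fraction=0.0))  # ~0.5: settles at the honest average
print(run_consensus(coerced_fraction=0.3))  # ~1.0: dragged toward the mandated value
```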
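
On item 8's point about tracing back what every layer produces: a minimal PyTorch sketch, using a toy model of my own, that registers forward hooks on each layer of a small CNN so the per-layer feature maps (the "mini-abstractions") can be inspected or plotted.

```python
import torch
import torch.nn as nn

# A small, hypothetical CNN; the point is only how to peek at what each
# layer outputs, not a model anyone in the thread actually used.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),             # low-level: edge-like filters
    nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # mid-level: corners/textures
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2)           # high-level: class scores
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every layer so its output gets captured.
for idx, layer in enumerate(model):
    layer.register_forward_hook(save_activation(f"{idx}_{layer.__class__.__name__}"))

image = torch.randn(1, 3, 64, 64)    # stand-in for a real input image
model(image)

for name, act in activations.items():
    print(name, tuple(act.shape))    # each layer's "picture" of the input
```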
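
The connection counts in item 10 can be checked with a few lines of arithmetic; this just spells out the numbers from the post.

```python
# Back-of-the-envelope check for a fully connected net
# with 4 layers of 4000 neurons each, as in item 10.
layers = [4000, 4000, 4000, 4000]

# Total weights between consecutive layers.
total_weights = sum(a * b for a, b in zip(layers, layers[1:]))
print(total_weights)  # 3 * 4000 * 4000 = 48,000,000

# Connections touching a single neuron in each layer: the outer layers
# see one adjacent layer (4000), the middle layers see two (8000).
for i in range(len(layers)):
    incoming = layers[i - 1] if i > 0 else 0
    outgoing = layers[i + 1] if i < len(layers) - 1 else 0
    print(f"layer {i + 1}: {incoming + outgoing} connections per neuron")
```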
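
For item 13, a bare-bones sketch contrasting a traditional fully connected "is this a chair?" network with a small convolutional one. The layer sizes are placeholders of my own choosing; the point is only why local, reused convolutional filters are so much cheaper than wiring every pixel to every hidden neuron.

```python
import torch
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Traditional fully connected net for one true/false question ("chair?")
# on a 64x64 RGB image: every pixel is wired to every hidden neuron.
fc_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
    nn.Linear(512, 1), nn.Sigmoid()
)

# Convolutional front end: small local filters (edge detectors and the
# like) reused across the whole image, then a tiny classifier head.
conv_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid()
)

print("fully connected:", n_params(fc_net))   # ~6.3 million parameters
print("convolutional: ", n_params(conv_net))  # ~5 thousand parameters

x = torch.randn(1, 3, 64, 64)                 # stand-in for an image
print(fc_net(x).item(), conv_net(x).item())   # both output P("chair")
```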
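
The throughput footnote in item 14, spelled out with the post's own rough figure of 2400 transactions per block:

```python
# Rough figures from the post, not current network statistics.
transactions_per_block = 2400
block_interval_seconds = 10 * 60   # Bitcoin targets ~1 block per 10 minutes

tps = transactions_per_block / block_interval_seconds
print(tps)  # 4.0 transactions per second on average
```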
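
Finally, a bare-bones sketch of the shared-latent-space wiring described in item 15: two encoders mapping summer and winter images into the same latent space, and two decoders that can reconstruct either domain from that code. This is my own toy version (no variational or adversarial losses, no weight sharing), so it shows the wiring only, not the paper's actual model.

```python
import torch
import torch.nn as nn

# Toy encoders/decoders; sizes are placeholders, not the paper's architecture.
def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU()
    )

def make_decoder():
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh()
    )

enc_summer, enc_winter = make_encoder(), make_encoder()
dec_summer, dec_winter = make_decoder(), make_decoder()

summer_image = torch.randn(1, 3, 64, 64)      # stand-in for a real photo

z = enc_summer(summer_image)                   # code in the shared latent space
reconstruction = dec_summer(z)                 # summer -> summer (autoencoding)
translation    = dec_winter(z)                 # summer -> winter (the cross-domain step)

print(reconstruction.shape, translation.shape) # both (1, 3, 64, 64)
```

In the actual approach described in the post, training constraints (the shared-latent-space assumption plus reconstruction and adversarial objectives) are what force the two encoders to land in the same space; the sketch above only shows why either decoder can then consume either encoder's output.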