The first five are basically one, in the sense that a blind or deaf person is not fundamentally less human than the rest of us. The model also misses some things, e.g. mere touch doesn’t include proprioception or the sense of balance, and if you read it as if it did (“body sense”), then why distinguish touch from, e.g., the sense of taste? The seventh I’d say is a subsystem (and so pervasive that the Stoics allow for both preferred and dispreferred indifferents – yes, you can prefer pudding over gruel or the other way round, just don’t think it’s a virtue); the eighth is a stage of development, what you get when everything aligns well. The impression of a well-lubed machine.
I understand your objections to the assumption that matter could be conscious based on this model. I think it would be inaccurate because not all matter has six sense bases, and the storehouse is itself an aggregate.
I generally have no real idea of where to put the line. This might help, though: anything less than a T3 system can’t have experience of mind (it can’t learn to learn, which requires feeding information about the changes in the mind (for lack of a better term) back into the mind). OTOH, that doesn’t mean that all T3 systems are actually integrating different sources, or balancing them: if you only ever were conscious of one aspect, there could be no conflict or interaction with another aspect, and thus consciousness would serve no role (and not evolve in the first place). It’s a matter of a required number of subsystems needing coordination, and that coordination itself having a necessary level of adaptiveness, i.e. being T3. Also, I can authoritatively say that the human mind is not made to think about that kind of stuff. It’s all maps and models; direct knowledge fails, and I’m not sure the territory can even understand the question. Look, a squirrel!
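If it helps, here’s the T2/T3 distinction as a toy sketch in Python (all names and numbers made up, so take it as an illustration, not a definition): a T2 system adapts its behaviour from feedback; a T3 system additionally feeds information about its own learning back into itself and adapts *that*.

```python
class T2Learner:
    """Plain learning: adjusts an estimate from feedback."""
    def __init__(self, lr=0.1):
        self.estimate = 0.0
        self.lr = lr  # fixed: the learning process itself never changes

    def update(self, target):
        error = target - self.estimate
        self.estimate += self.lr * error
        return abs(error)

class T3Learner(T2Learner):
    """Learning to learn: watches how its own errors develop and
    adjusts the learning process accordingly."""
    def __init__(self, lr=0.1):
        super().__init__(lr)
        self.prev_error = None

    def update(self, target):
        error = super().update(target)
        if self.prev_error is not None:
            # speed up while progress is good, back off on overshoot
            self.lr *= 1.1 if error < self.prev_error else 0.5
        self.prev_error = error
        return error

t3 = T3Learner()
for _ in range(20):
    t3.update(target=5.0)  # the learning rate itself changes over time
```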
In principle, subsystems that aren’t awareness can also be T3 systems; I suspect that of at least the motor cortex: mine does seem to have gotten more effective at learning from moment to moment, meaning it learned how to learn better, and that’s T3. At least I think it’s not just me learning to not micro-manage it as much; it’s very hard to be sure about any of this, too many intersecting possibilities.
From the cybernetics/information theory side we don’t really know how these kinds of systems work in the first place; we’re barely getting started understanding T2 systems. All the AI tech we have is basically a way to breed fruit flies to fly left or right when seeing certain patterns, with enough computing power thrown at it to look impressive. We already had that kind of tech in the ’50s (first implementations in ’54 for genetic algorithms, ’57 for the perceptron), of course less impressive.
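For flavour: the ’57-style perceptron learning rule fits in a dozen lines. This is a toy reconstruction, not anyone’s historical code:

```python
# Classic perceptron rule: nudge the weights whenever the prediction
# is wrong. All the "intelligence" is in those two update lines.
def train_perceptron(samples, epochs=20, lr=1.0):
    # samples: list of (inputs, label) with label in {-1, +1}
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # wrong: move the boundary toward the example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# e.g. it learns AND, that being linearly separable
w, b = train_perceptron([((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)])
```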
I don’t know how this learning takes place, though, as muscle biology isn’t much of an area of interest for me.
Directly attached to the muscles there are tension sensors and a simple feedback controller; in essence you can set a set-point, like with a thermostat, and the feedback loop will keep the muscle at a certain length. Those are then wired up into groups (not rarely overlapping ones) using further feedback loops; that’s, roughly speaking, the Chinese muscle-tendon lines, turning “lengthen/shorten this muscle” into “open up your hand, the elbow joint, and the front of the shoulder”, a higher-level movement that’s generally speaking biomechanically sound (see six-harmony movement), using advantageous levers etc. It’s all not terribly complicated, but it’s perfectly capable of holding a posture stable against (not too major) interference; it can balance you perfectly on one leg with closed eyes (if you manage to not micro-manage) with an unchanging set of set-points. The actual learning magic happens in the motor cortex (learning how to set the right set-points to achieve a certain posture or succession of postures, i.e. a complex movement), which also projects the body’s map into the rest of the brain.
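Roughly, and with every number and name invented for illustration, the set-point story looks like this: thermostat-like loops per muscle, bundles of set-points as higher-level commands.

```python
import random

class MuscleLoop:
    def __init__(self, length=1.0, gain=0.3):
        self.length = length     # current measured length
        self.set_point = length  # target length
        self.gain = gain         # how hard the loop corrects

    def step(self, disturbance=0.0):
        self.length += disturbance            # outside interference
        error = self.set_point - self.length  # tension sensor reading
        self.length += self.gain * error      # corrective contraction
        return self.length

def hold_posture(muscles, set_points):
    """'Open the hand, the elbow, the front of the shoulder' becomes
    one set-point per muscle in the group."""
    for muscle, sp in zip(muscles, set_points):
        muscle.set_point = sp

arm = [MuscleLoop() for _ in range(3)]
hold_posture(arm, [1.2, 0.8, 1.0])  # an unchanging set of set-points...
for _ in range(50):
    for muscle in arm:
        # ...holds the posture against (not too major) interference
        muscle.step(disturbance=random.uniform(-0.05, 0.05))
```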
It’s really hard not to think in terms of hierarchy, though.
It’s what Anarchists call hierarchical realism: we all know the multitude of failure points and issues hierarchical organisations have, but often the first reaction people have when told about any horizontal organisational structure is “that can never work, there needs to be someone in charge”, as opposed to “that looks interesting, what are the specific points we need to be aware of to keep this from collapsing” – as if someone was in charge at the grill party last weekend, as if all the horizontal organisation we’re embedded in day to day wasn’t actually real, as if order implied hierarchy.
If you’re looking for a systems science textbook, there’s Mobus and Kalton, “Principles of Systems Science”, written for a general audience – academic, yes, but they don’t front-load it with maths, so it’s suitable for liberal arts students (SCNR).
But it’s always multiple loops: even if the T4 system seems to be separate from our little 60–80 year lives, it’s not.
Back in the day, the genome was called “the ancestors” and revered for all the useful information it hands us. It’s usually quite abstract; it can, after all, not anticipate our concrete circumstances. Evolution also isn’t random (at least if you ask physiologists): left to mere chemistry there’d be a disastrously high error rate in DNA replication; corrective proteins bring that down to practically zero, and after that is done, randomness is re-introduced, apparently in a rather strategic way, to direct adaptiveness. If a bird doesn’t get enough nectar, it probably doesn’t make sense to mess around with mitochondrial DNA; what you want to evolve is the beak shape. Evolution seems to be, erm, evolved enough to be that strategic, maybe not in all aspects, but in the really important ones (important for fitness, that is).
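In genetic-algorithm terms, that “strategic randomness” is just a per-gene mutation rate. A toy sketch, with genes and rates entirely made up:

```python
import random

# Copying is first made near-faithful ("corrective proteins"), then
# noise is re-introduced selectively, at a rate chosen per gene.
MUTATION_RATE = {
    "mitochondria": 0.0001,  # core machinery: leave it alone
    "beak_shape":   0.05,    # trait under selection pressure: explore here
}

def copy_genome(genome):
    child = dict(genome)  # the error-corrected, faithful copy
    for gene in child:
        if random.random() < MUTATION_RATE[gene]:
            child[gene] += random.gauss(0, 0.1)  # targeted randomness
    return child

parent = {"mitochondria": 1.0, "beak_shape": 1.0}
offspring = [copy_genome(parent) for _ in range(100)]
# beak shapes vary a lot, mitochondria barely at all
```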
Sorry, that’s more Buddhism than AI or cybernetics.
Hey, I’m glad to meet a mind that isn’t stuck on either side. Too many esoteric tea-bag swingers on the one side and armchair theorists on the other.
Here’s a complexity theory paper talking about anarchy.

Maybe more fruitful and approachable, from the Anarchist perspective: Anark has a bit about the cybernetic underpinnings of Anarchism included here; that’s part 2 in a series that also goes into the group/individual theoretical divide in anarchist theory. The first one goes into the nature of the beast and the third into how to kill it.
Again Anark, less theoretical, but instead going over how and why the Russian and Chinese revolutions failed: there’s his “the state is counter-revolutionary” series, also available as text. But oh boy, is everything he ever does long.