These are half-formed thoughts and scribbles from offline notepads that I'd like to start airing out.
Hat-tip to @nayafia for inspiration on this format of thinks :)
Working in a high-churn culture like a civic hacknight, where 40% of people might be new each week, there are some interesting opportunities to work on non-exclusive and unloaded language for being critical of dominant systems. Here are some turns of phrase I’ve discovered to be useful:
- Instead of speaking against capitalism, speak of capitalism’s lack of curiosity toward certain important things. I often say “I try to work on things that capitalism isn’t curious about.” People embedded in capitalist systems find it hard to be critical of it, likely because it’s hard to imagine or hope for anything else. But it’s easy to agree that capitalism does not display an interest in certain things that humans are interested in. And a very human frame for that is curiosity. Being anti-capitalist is a heavy title to wear, but being skeptical of incurious systems is easy. We can talk about it just as we might speak of skepticism toward intellectually or emotionally incurious people who don’t ask questions of peers.
- Instead of speaking only about “diversity & inclusion”, speak sometimes about generosity of leadership. I like to think of this as creating a positive pressure for leadership, so that it will flow outward. It doesn’t prescribe where that leadership should flow, but if the general environment encourages thoughtfulness around representation, then the folks sharing leadership will inevitably end up on their own personal journey of deciding who they should offer that leadership to. This allows the work to happen more in the headspace of the person doing the work. They will have more ownership of the conclusions drawn. Combined with short rotations of leadership, there are many surfaces and transition points to learn from. Community projects aren’t companies – you can appoint founders only once, so that decision bears great importance, but if the person stewarding an initiative changes often, then each change is an opportunity for each group to recalibrate on who that culture values seeing in a leadership position.
The complexity video that I linked yesterday talked about the steps that ants follow to make decisions on placing a new nest: info-gathering, evaluation, deliberation, consensus-building, choice, implementation.
Curious whether there are analogies to how people engage in individual-to-individual collaborations (person offering support to person), but also more complex collabs. As in, up through more advanced forms like individual-to-group (person offering support to other group), or even group-to-group collabs (group offering support to another group). This last one is understandably the most complicated way to work together, because knowing self and other as a group is so much harder and less intuitive for our individual sensibilities to navigate.
I’m inclined to see the world through a lens of complexity science, where high-level patterns at one scale (e.g., cell death in biological systems) can perhaps teach us things about patterns that will fare best at other scales (e.g., governance systems in society).
I’ve been slightly disconcerted by the fact that centralization seems to “win” in almost every biological system – almost every creature that reaches a certain level of complexity also develops a central nervous system. The skin cell of the finger becomes subservient to the brain cell. What might this say about authoritarianism vs democracy?
The optimist in me wants to believe that maybe, if the universe favours centralization, then maybe this could be the force that humanity rallies against. Because we’re good at raging against things. It seems to be part of how humans operate, and work together. And so we often rage against one another. But maybe, we could create a shared understanding that the very fabric and tendencies of the universe are conspiring against some shared democratic ideals. And so then we could just rage against that non-human enemy, the entropic laws of the universe itself. What a perpetual underdog story that would be. It could perhaps occupy our attention and energy for a long time.
But anyhow, today, I realized a bit of a catch in some of my thinking. I often look at the patterns in the world around me, and try to learn from them. So I’d look at how brains (one example of a network highly saturated in activity) seem to have evolved a need for sleep, a period of rest. And then I might wonder what that might teach us about how our contemporary society (another network of nodes that is increasingly saturated with signal) might be begging for similar interventions – periods of rest from the high activity, for re-organization.
Or I might look at how we see the relationship between structure and anarchy in living symbiotic systems that are animal bodies – the hierarchy of the traditionally conceived “host” animal, and the anarchic gut biome that plays such an important role in processing raw components from the wider world. What might that teach us about the role of chaotic spaces adjacent to government? (Hat-tip to MH for a conversation that sparked those particular thoughts.)
I believe that these comparisons aren’t simply analogy, but rather, they’re examples of multiple systems converging on similar outcomes due to the shared properties of networks more generally. Each network operates and is built from different substrates, different components, but there are common underlying dynamics at play. This is what I understand from complexity science.
And this is particularly important in our current situation. We are now running network experiments on global scales. We don’t necessarily have, in our systems, the capacity to run a million iterations of the system. Ecosystems could run millions of cycles among populations of individuals. Bacterial biomes could run many cycles. Some parts of human culture have had the space to run a scant few experiments, as cultures rise and fall. But we’re getting to a place where we must learn from other layers of the network, because we don’t have the space or time to run the experiments to completion at the scale we’ve arrived at – the global scale.
But anyhow, it occurred to me today that perhaps there’s something I should be wary of while looking to learn from the patterns that have “won” at lower levels. Because those patterns are often ones that emerged in networks where hierarchy prevailed. So I should be cautious not to look too much to them for my learning. Because perhaps some of them are patterns of appeasement to hierarchy. Perhaps some of them represent ways to survive in the cracks between towering structures of central control, rather than an imagining of what might be possible if we exercised human-scale democratic values at our level.
(Thinking back on the intuitions involved in our conscious experiences) Maybe the evolved trait that is consciousness, is more about how we feel the sensation of the ride. Like a surfer’s developed instincts, honed for riding waves. We can carve that wave, and enjoy the simple thrill of navigating it, but the major aspects of the experience – the orientation, the shore as ultimate destination – are forces beyond our control.
(Building on a prior thought about what underlying process might be getting hijacked in the “bad” scenarios of “fast” and “slow” AI, i.e. conventional general AI and corporations, respectively.)
So what if “synchrony” is the thing that all our human intuitions have evolved to select for. Our laughter, our sense of belonging, our falling in love, our enjoyment of engaged conversations. Perhaps these all increase the synchrony of our collective endeavour of humanity. And in that, a process of converting an “other” into a “self” – something that lights up our mirror neurons, like watching a sports team (your sports team) working together on the other side of the airwaves, or speaking across the couch to a lover, or even across the corpus callosum to a self that’s lucky enough to already be in agreement.
We subjectively feel these human experiences as something more, but perhaps the universe fundamentally understands them only as a collective of matter, increasing synchrony with itself.
Ok, so if this is true, then what are the dangers? How can these intuitions that largely serve us well, that help us to create synchrony with our fellow human actors – how can this be hijacked?
I’m starting to wonder if that is what feels so dangerous about corporations. These are deeply inhumane creatures, operating largely with very different motives. Yes, we’ve encoded these motives in laws at our scale. But what emerges is perhaps above us. We have allowed these things to optimize according to rules that, if operating at a human level, would be understood as psychopathic and anti-human: Pure profit maximization is condoned.
And that’s nothing new, but what is alarming is how these things can start to run with a life of their own, and seem to use humans as “peripherals” of sorts. They deploy people with certain skills and motivations that are perhaps altogether human and sincere. I sometimes imagine employees as some sort of sci-fi puppet on a stick, being thrust into our realm. These people are given autonomy, and they often act in good faith, but they are selected by the larger apparatus for the sincerity with which they operate. Their very authentic interactions can run cover for the very inauthentic mechanisms at the core of corporations – the profit maximization.
And back to synchrony. What I worry about here is that the corporation has a very different synchrony that it is propagating in the world. It has its own patterns and logic, and they are not ours. They’re building something else. But when persons representing these larger interests interact with people, they feel like they’re increasing the synchrony of human processes. They feel like they’re pursuing very human goals, down in the trenches with us. All the participants’ experiences and intuitions might even be telling them “YES, we’re doing something pro-human. We’re becoming more aligned. We’re becoming in sync.” But on the backend, where the power is, there’s another order building.
It’s perhaps a little like how we imagine “fast” AI might go awry. If we were to meet one, we might feel it’s connecting with us on the front end. Its brows might furrow in a way that makes us feel understood. It might respond in a way that we understand to mean it’s feeling for us, becoming aligned with us. But these are just the movements of servos and the motions of human projection. The servos are not evil – they are not good or bad. They are peripherals. They are serving something behind them.
The octopus is an interesting creature. It has a distributed brain that extends into its arms. Its central brain gives loose orders, but the limbs actually do some of the work of knowing what to do. It seems to be a common curiosity, to wonder aloud how alien an octopus consciousness might be. But perhaps we already know. Perhaps we’re living in a system not unlike the octopus’. We are appendages of things above us – we process, and integrate, and make decisions. Maybe we can’t know what it’s like to be the octopus, but maybe our navigation of the corporate entities of capitalism tells us a little about what it’s like to be the octopus’ arms.
In conversation with my friend SL ages ago, I had the chance to work through a thought that maybe society is like a board of flickering, disjointed blinky lights, each representing a conscious creature. The acts that we’re engaged in are at their core an act of creating synchrony in that blinking, and holding it as long as possible. So while the board is a proxy, with just one dimension (brightness), we are navigating uncountable other dimensions of potential synchrony. Many of these dimensions are antagonistic. There are countless ways to resolve these systems of equations.
I feel this view is informed by a TED talk I once watched, Do we see reality as it is?. The speaker opened with a story about a beetle that knew to recognize a shape as a mate. That simplified assumption about how to move in the world held true for a long time. Until an Australian beer company created a bottle that tricked that beetle’s intuitions. Instead of finding mates, it began choosing the dead-end option of fucking the bottle. And it started to go extinct. Its intuitions, which used to serve it well, were now being hijacked as the environment changed. And while this instance was a change outside its control, humans could theoretically do the same thing within their own environments, sending themselves on a more complex dead-end trajectory.
But I’m curious what underlying process is being hijacked. For both the beetle and for us. What common process is being navigated, that our intuitions are highly tuned to optimize for, but that is being thrown out of whack? And I wonder if it’s synchrony itself. I find this supported by recent research showing that deep conversation (presumably creating a subjective sense of reward in participants) results in synchrony of brain activity under MRI scans.
So what if the thing that we’re moving toward as conscious life (of which “intelligent life” is just a specific subset) is an increase in synchrony with our fellow travellers – the human persons, animals, plants, buildings, internets, and architectures of all sorts. Maybe that’s the thrust of it. A sort of cosmic like-attracts-like of consciousness.
So if that’s the case, then our experience – the things we desire – are just a proxy for the baser need and drive for synchrony. Our senses and intuitions are simply the things we’ve evolved to root out that synchrony. Just like the beetle evolved this attraction to recognize a thing like itself, which implies an underlying synchrony of information and simple concepts within its mind. Perhaps evolved language itself is just our way of seeing deeper, more nuanced synchrony within the minds of other beings like ourselves.
And these thoughts lead me to the worries. What might we be engaging in that’s like the beetle? What bottle are we fucking? What things are we pursuing away from life, with miscalibrated sensors, seeking synchrony and finding only hollow vessels? I’m still working through this, but I suspect there’s something to be learned about how we can navigate our future “fast AIs” and our contemporary “slow AIs”, the corporate structures we find ourselves navigating amongst as fellow travellers.
Thinking about speculative civic sci-fi related to smart cities. Wondering if maybe human culture is best thought of as a probability cloud, not a state machine. Just as the lightbulb was invented in several different places at nearly the same time, a specific arriving of a future is not a singular event, but a potentially inevitable field of moments evoked. And if this consideration of society as a probability space is correct, then it’s less a question of whether these people are doing this good thing or this bad thing, but whether they can. If they can do the bad thing, then that future is more adjacent.
Or considering through a social physics lens. Through that lens, we think about possibilities in terms of how many hops away an idea is between people. Maybe we can consider how close we are to the world we want – or the one we don’t – by how many hops away that desirable or undesirable possible reality might be.
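The hop-counting idea above can be sketched as a simple breadth-first search over a social graph. This is just a toy illustration – the network, the names, and the `hops` helper are all made up for the sketch:

```python
from collections import deque

def hops(graph, start, goal):
    """Breadth-first search: the minimum number of person-to-person
    hops an idea must travel to get from start to goal."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # the idea can't reach goal at all

# A made-up social graph: who talks to whom.
graph = {
    "ada": ["bea", "cal"],
    "bea": ["ada", "dev"],
    "cal": ["ada"],
    "dev": ["bea"],
}

print(hops(graph, "ada", "dev"))  # → 2
```

In this framing, a possible future becomes “more adjacent” as the hop count between the people who could enact it shrinks.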
Metaphor: We’re perhaps moving through the part of the marble tilt maze where the holes run thick and the steel ball drifts lazily across narrow surfaces in parabolic arcs.
(Thinking on “A Brief History of Everything”) Considering emergence and “holons”, things that are both wholes and parts. Maybe commons-based peer production is favoured because it involves small pieces that can be shuffled and restacked in search of emergent properties.
We are matter that encodes descriptions of itself. We rattle the air between us. We are ball lightning.
How might super-intelligent descendants of cold-blooded lizards think of or describe windchill? For them, compared to us, it’d be so much more dangerous – a killer. How would their culture understand it, and talk about it?