These are half-formed thoughts and scribbles from offline notepads that I'd like to start airing out.

Hat-tip to @nayafia for inspiration on this format of thinks :)

Notes

2021-05-04 🔗

Note: I have no desire to create any of the conditions I speak about in this post. I’m quite eager to be wrong…! My usual process is to learn something about the universe, and to find it quite ugly and brutal at first, but then to learn about a way to frame it as something beautiful and poetic (as the universe deserves). I feel the words below are a bit on the brutal side at the moment, unfortunately…!

Why am I not a rationalist? I don’t actually think that the universe favours rationalist thought. In fact, I think it probably drives it to extinction.

My intuition on this comes from this TED talk by Donald Hoffman: Do We See Reality As It Is?

I feel it’s an overall mind-bending talk, but the key point is that when his lab ran many simulated worlds, they discovered something neat. The simulated creatures that were seeded in this world were of two sorts: those that saw the sim world for what it was (reality), and those who only saw the world as it related to “their own fitness”. Imagine here that “seeing fitness” was just seeing things that helped them continue to the next “tick” in the world engine, while remaining blind to what reality underpinned that fitness.

Anyhow, when these creatures were pitted against one another in this sim world, the creatures seeing only fitness ALWAYS drove to extinction the creatures that saw “reality as it is”. In my read of that, seeing reality as it is was mostly a liability.
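
To make that intuition concrete for myself, here’s a tiny toy sketch in Python. It’s my own construction, not Hoffman’s actual model: the true resource quantities are the “reality”, but the payoff for consuming a resource is non-monotonic (a middling amount is best, like water or salt). A “truth” agent ranks options by true quantity; a “fitness” agent ranks them only by payoff.

```python
# Toy sketch of the "fitness beats truth" intuition (not Hoffman's actual model).
# Resources have a true quantity, but the evolutionary payoff is non-monotonic:
# a middling amount is best -- too little or too much is bad for you.
import random

def payoff(quantity):
    """Payoff peaks at a middling quantity (0.5) and falls off on either side."""
    return max(0.0, 1.0 - abs(quantity - 0.5) * 2.0)

def run(rounds=10_000, seed=0):
    rng = random.Random(seed)
    truth_total, fitness_total = 0.0, 0.0
    for _ in range(rounds):
        options = [rng.random() for _ in range(3)]   # true quantities of 3 resources
        truth_pick = max(options)                    # "truth" agent sees and prefers the largest true quantity
        fitness_pick = max(options, key=payoff)      # "fitness" agent only perceives the payoff
        truth_total += payoff(truth_pick)
        fitness_total += payoff(fitness_pick)
    return truth_total, fitness_total

if __name__ == "__main__":
    t, f = run()
    print(f"truth-seer payoff: {t:.0f}   fitness-seer payoff: {f:.0f}")
```

Run it and the fitness-seer reliably out-earns the truth-seer, so under any selection pressure it’s the truth-seer that disappears.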

So what is “seeing only fitness” in the human sphere? My understanding is that it’s the spiritual. It’s religion. It’s the emotional and deeply interpersonal. It’s intuition. I understand emotions to be simply proxies and heuristics running across the tenuous surface of “reality as it is”. And in my view, any worldview or system that doesn’t incorporate or centre on the role of emotion or the spiritual is probably doomed.

So what is it about rationalism that has become so elevated these days? Why do rationalists feel they’re having this moment? I think rationalists are simply having a brief moment of power, because an important system and place of power right now (computers) currently rely on abstractions that exist in a certain way that is most legible to their ways of seeing. So rationalists are then over-represented in the cohort of world-builders of this moment. But those abstractions are constructed. The nice organization of our digital systems is constrained to be legible to them. Those interfaces are only preserved and legible to rationalists because that is how we’ve “pinned them”. We pin interfaces to remain legible to us – to human minds.

But when our minds have no need to understand the interfaces – or perhaps, when the interfaces have no need for us to understand them – then they will drift. We’re already seeing it with machine learning. The black box is drifting out of reach. We’re creating incomprehensible boxes, and the clean interfaces of our programming languages are transforming into the illegible sea of parameters in our machine learning models. Of course, we’re trying to wrestle it back, but it’s only with great effort. In the big picture, the universe does not favour our understanding it. (Or perhaps it only favours our understanding it when it’s doing a breadth-first search for the next substrate to leap into.)

Biology ran amok for millennia before one biological agent (us!) had the wherewithal (and a pocket of spacetime) to insist that its own biology should be mapped and understood. I believe the digital strata of matter might pan out in the same way. We’re seeing the digital run further ahead down the path, dashing out of our reach. We’re drawing it back every so often to explain itself to us, but I expect it will at some point run beyond our reach. And then, the only ones to understand it will be the machines themselves, perhaps in a trillion cycles’ time, once the explosion of forward bursts has subsided, when a natural ceiling or constraint has been reached. At this point, the machine itself will develop the same wherewithal, the sense that it should probably understand itself – an echo of our current bio-cultural moment for humanity, this flash of the scientific revolution.

Anyhow, these interfaces, untethered, will become unmoored from human understanding, and they will start to look and feel more organic. These new interfaces (perhaps neural network neurons are already this) will grow and propagate and divide, and coalesce in any place where “fitness” can be found. They will not care about the nice abstraction interfaces we once penciled in for them at designated levels of the stack, at the sweet spots that matched our preferences – an equilibrium condition amongst variables within a system of equations: our preferred length of words, the capacity of our short-term and long-term memory, the ideal length of sentences, and depth of reasoning and recursive thought – all quite arbitrary things in the grand scheme of the universe.

And when this untethering happens, then the rationalists will be grasping and floundering in a chaotic ecology of information with the rest of us. Perhaps that world will feel and look quite like the chaos of a natural organic world.

2019-12-09 🔗

Every once in a while, I get sucked into thinking about holograms and the holographic principle. The gist of this principle is a non-intuitive statement: the full description of a volume of space can be encoded on its surface. Or in other words, we can store information about n dimensions within n-1 dimensions.

Back in 2017, I wrote this on a discussion forum for tech and startup folks:

I’ve recently been thinking on how our human definition of intelligence might relate to holographic principles, particularly in regards to information theory.

We are small creatures, but our networks – our brains and societies – represent the most complex information-encoding geometries we’ve yet seen in the universe.

And I see the way that our curiosity reaches upward in scale, documenting the far corners and folds of the universe; and deeper, interrogating the tiny subatomic spaces; and forward and back, building models of the future and past of this point in time.

And we capture this knowledge and bring it into our tiny space, information encoded in structures along the skin of this rock floating in space.

And I wonder if that’s not holographic in some way: That insatiable drive to compress information from massive scales of space and time into the tiniest of spaces…

At the time, I related the above to “intelligence”, but I wonder if maybe it’s just the processes of life itself, of which intelligence is just the incarnation we’re participating in.

Maybe the thing we experience as life is just our interpretation of some folds of space (us!) where information about the whole observable universe (all dimensions of deep time and deep space) is being written onto the surfaces of these folds, as electromagnetic energy shines on us and through us.

Another hologram-related thought I’ve been rolling over is in relation to community organizing.

First I should explain a neat feature of holograms: With a photo, if you cut it in two, you get two 2D images with separate information. With a true hologram, if you cut it in two, you still have two pieces of hologram with the whole 3D image. But interestingly, each piece has lower resolution. You can see a video demonstrating this.
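
A loose way to convince yourself of this numerically (a sketch of my own, using a Fourier-transform stand-in rather than a real hologram): keep only a small central fragment of an image’s frequency plane and reconstruct, and the whole scene is still there, just blurrier.

```python
# Loose numerical analogy for the hologram-cutting property (not a real hologram simulation):
# keep only a fragment of an image's Fourier plane and reconstruct --
# the whole scene survives, just at lower resolution.
import numpy as np

# A synthetic "scene": a bright square and a bright disc on a dark background.
n = 128
y, x = np.mgrid[0:n, 0:n]
scene = ((np.abs(x - 40) < 10) & (np.abs(y - 40) < 10)).astype(float)
scene += ((x - 90) ** 2 + (y - 90) ** 2 < 12 ** 2).astype(float)

def reconstruct_from_fragment(image, keep):
    """Zero out all but a central keep-by-keep patch of the Fourier plane, then invert."""
    F = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(F)
    c = image.shape[0] // 2
    half = keep // 2
    mask[c - half:c + half, c - half:c + half] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

full = reconstruct_from_fragment(scene, keep=128)      # whole "hologram": sharp scene
fragment = reconstruct_from_fragment(scene, keep=16)   # small fragment: whole scene, blurry

# Both reconstructions still contain both objects; the fragment is just lower resolution.
print("full-plane error:    ", np.abs(full - scene).mean())
print("small-fragment error:", np.abs(fragment - scene).mean())
```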

This property of holograms reminds me of what it’s like to co-organize within a distributed leadership community like Civic Tech Toronto.

We engage in a ton of behaviors like rotation of roles, and generosity of leadership. In practice, this means that we systematize rotating responsibility for key roles, which get better defined and articulated as they pass between people. We also don’t valorize someone occupying a role twice in a row, and instead try to role-model good leadership as the passing on of any role: Our responsibility is not to do roles repeatedly, but to invite the next person to take on the role. Doing otherwise is perhaps considered by some to be hoarding an opportunity for learning.

Due to some of the above practices, the hacknight roles (AV, MC, greeter, live streaming, etc.) end up being known quite well by many people. This makes the organization feel very fault-tolerant, because many people know many roles, having done them before. So for example, one co-organizer can leave for an extended period, or step back in what they offer, and neither the organization nor the individual gets stressed.

This makes the organization feel very holographic in nature: Each co-organizer has information about the whole structure encoded in them. Together, all the organizers have a very high-fidelity understanding of the organization. But also, like a hologram, each organizer on their own (i.e. a fragment of the leadership) has a full (albeit low-resolution) understanding of the whole organization. This is in stark contrast to a traditional delegated leadership model, where each organizer is more likely to have a high-fidelity version of a small section of institutional knowledge. It’s the same difference between a photograph and a hologram: After cutting, you either get one half of the cut picture, or a half-resolution version of the whole.

In a distributed leadership community, even if all but one organizer were to disappear, knowledge of the general shape of the organization would survive. Or to put it another way, each co-organizer is very capable of seeding a new community that resembles the previous one in most meaningful ways.

This is perhaps a very useful way for a grassroots movement to operate, be resilient, and spread its learnings into other spaces.

2019-12-08 🔗

From conversation with LB:

I found myself staking a position that regardless of how imperfect people’s assemblies are, running them in a recursive fashion makes them much more interesting than a non-recursive structure. For example, if a people’s assembly is being used to refine its own process (or to plan more people’s assemblies), then that makes it much more interesting than perhaps a “better” process that is not recursive or nested.

I’m not sure why I felt so confident in this, but I analogized my position with one that’s easier to defend and find evidence for: the fact that an iterative process is always better than a non-iterative one. This is supported by the diversity of agile processes being used in the tech and manufacturing sectors, and beyond. Agile isn’t a very prescriptive philosophy, and there are a number of variations, but all practitioners would agree that iteration with a short cycle time is crucial to discovering benefits. This is because when you iterate, each cycle can be most easily compared to prior cycles. It also helps that each cycle often has a key feature called a “retrospective”, which invites the participants to revisit and question the process itself, and potentially decide to modify it.

The iterative nature of the process also negates the urgency for the initial version to be perfect – iteration means that we will stumble through permutations and keep finding the best one for the current context, even as that context might change around us, or we might improve in our ability to read the context.

I found myself defending “recursive organizing processes” in a similar sense.

Other examples of recursive processes could be:

  • a civic hacknight (food + speaker + open space for project breakouts) having a breakout group that improves the hacknight itself.
  • sociocracy (a nested organizational structure of “circles” and “roles”) having one circle tasked with improving the governance system itself.

In the same way that an iterative process will stumble into the most appropriate configuration, a recursive process can perhaps stumble into the most appropriate scale or maybe complexity.

A recursive process can grow more easily without requiring costly reorganizations that push it toward centralized decision-making.

If it starts to lose mass, it can also more easily shrink or contract, instead of becoming hollow as a traditional organization might during a loss of resources (human or capital or otherwise). Movement organizers and non-profits might be more familiar with this dynamic: such organizations essentially become thoughtlessly top-heavy and at risk of toppling over under their own weight when unable to support their administrative mass.

Perhaps related, though not sure how: The Recursive Mind: The Origins of Human Language, Thought, and Civilization

2019-10-10 🔗

Since I feel that this is the place that I’m spending more time caring for lately, I wanted to share a link to a Medium article I wrote last Winter. As seems to always be the case, this has complexity science undertones…!

Reflections on the pattern of Death

2019-10-08 🔗

While biking along the Great Allegheny Passage this summer (h/t MC and MP for making this trip happen), I got thinking through a complexity science lens about the quality of the experience, in relation to the forest around me. It got me thinking about futures, and the possibility field of the present moment.

We were rapidly passing through this incredibly complex and dynamic environment. We were part of a collective (humanity) that ostensibly recognizes its dependence on that natural environment for its own flourishing, but perhaps doesn’t recognize the scale of that dependence. For the most part, humanity ignores it and just lets nature be, and that generally works out when we don’t have a specific need to consume/destroy it.

So anyhow, we’re cycling through this beautiful natural system, and I’m imagining how my presence must feel or look to the participants of that place. The plants, the bugs, the birds, the squirrels. I am this thing that zooms through their sensory domain, mostly without stopping and participating in the things that have such primacy to them. In the language of complexity science, I am a volatile structure and I flit through relatively unceremoniously, on my way to concerns they cannot understand or comprehend.

But it’s perhaps a little more complicated than a cold alien visitation of sorts. Because they maybe don’t realize it, but there’s something about them that brings me peace. Natural systems rejuvenate us all, if only due to some evolutionary baggage, which you might think of as genetic nostalgia. We’ve perhaps preserved that nostalgia in our bones, because this reverence is important to our not “innovating” the lower levels completely out of existence. Or from another perspective, maybe our capacity to be endeared by the natural, to be nostalgic for these places we live in less and less, is the reason we’re still here. It’s the reason we haven’t totally destroyed this place yet. To frame it otherwise might be survivorship bias, plain and simple.

And thinking on this, I wonder about the next phase of things…

  1. Will the artificial forms of life to come feel similarly about the domain of flesh, biology and human society?
  2. Will they do the equivalent of cycling through the rejuvenating forest of human minds, while taking pause from their unknowable affairs?
  3. Are we guaranteed that they will feel this way about all of us? Will we be their bugs or their squirrels? Will we be easy for them to protect?
  4. And if not, how might we better guarantee that they do have the sort of development that creates this interdependence in their formative years? Can we engineer – or no, rather – can we cultivate that reverence?
  5. Can we tie their fates to ours in a way that ensures we’re eventually endeared (if not also experienced as quaint and simple)?
  6. Will they connect more with those of us that share aspects of their character, like how dogs trigger neural activity of parent-child pair-bonding?
  7. And if there are human characteristics we hope to share with them, how might we nudge toward that path, if at all possible?

2019-10-07 🔗

Gratitude: Thanks to my brother CC for introducing me to the work of adrienne maree brown, even though I got distracted from reading it for a year or two. And thanks so much to CS and LB for being great friends and re-energizing me to explore the works of both AMB and Octavia Butler.

In the past year, I’ve read two books that have offered some new angles I’m still reflecting on. The books feel a little bit like secret sisters. In some ways, they are totally not about the same things, and each author might even be offended to be put in the same basket. But it feels like they’re each getting at some common concepts, from different perspectives.

The first book is Emergent Strategy by adrienne maree brown. The book is ostensibly about community organizing and how to build networks “an inch wide and a mile deep”. The book is filled with personal anecdotes and truly puts relationships at the centre of the message. AMB goes to great effort early in the book to clarify that she doesn’t understand lots of the science that she’s leaning on – that she’s intuiting her way around the general concepts. But as a science-informed person myself (biochemistry background), I am floored by how well she seems to put a finger down on so many subtle truths.

The second book is A Brief History of Everything by Ken Wilber. The book tries to do what it says on the tin. Speaking of which, Wilber puts his face on the cover of most versions of the book, which feels quite relevant to the mindset of the author and how he frames his role among the ideas in the book. The book builds a well-intentioned and complex (if not overwrought) theory of how the universe works. And it is massive – both the book and the theory. It goes from the subatomic to the cosmos and God. At various points, he’s been known to followers as a mystic of sorts, and you could be forgiven for thinking he aspires to that, judging by the ways he presents and centers himself.

The book has terms and language for everything. It’s over 500 pages. It’s formatted ostensibly as a conversation between two people (a Socratic dialogue), but it is in reality just Ken Wilber spending 500 pages explaining his theory. It’s an explanation masquerading as a conversation. I don’t think he mentions another person through the whole book. But while you might guess that I don’t like the tactics, the things he has under his microscope seem to ring true, just as do the ideas that AMB is dancing with.

And there’s perhaps another layer of comparison within these books. AMB gives huge amounts of credit to the wonderful sci-fi author Octavia Butler. She speaks of her as a figure of reverence, though Octavia herself didn’t aspire to that. Adrienne relates much of her own wisdom through a lens of gratitude for the insight that arrived through her reading of Butler.

Octavia Butler is an author who seemed to play around with the ideas of God and spirituality to great effect. In Parable of the Sower (the only book of hers I’ve read, so it’s what I can draw from), she seemed to explore religion and God through a very feminine and relational lens. It’s like she navigated into the realm of the spiritual, took it apart, and reassembled the very nature of the spiritual into a book. And the fabric of the book itself shared the disassembled pieces of spirituality with the reader. I almost imagine it like she spent time to truly understand the spiritual, and then distributed that understanding to everyone.

Contrast that to another literary figure who spent time exploring the realm of the spiritual and religious: L. Ron Hubbard. You may know him as the creator of Scientology, a terribly hierarchical and oppressive religion built by a sci-fi author after he was done critiquing and writing sci-fi about it. Hubbard took the exact opposite approach to Octavia Butler. While she took it apart and shared it, he saw the power of the spiritual, and (after scorning it) then took control of it and concentrated it in a form that benefitted him. He created a system in which he could be the broker and intermediary of the power that he came to understand. He took it apart, and reassembled it into a gun, and he put it in his drawer.

I don’t yet know what to think about these observations, but it feels like there’s something underpinning it that I’m eager to be attentive to.

2019-10-05 🔗

(Related to some previous thoughts on consciousness being a process of synchrony-seeking, and maybe language-as-organism. This article on laughter is probably also good context for these thoughts.)

Gratitude: I want to express attribution to SG, a new acquaintance who I met at a loud dinner. Last night, he indulged me with his attention and helped me speak aloud some related thoughts, which directly re-energized me to write a bit more about this stuff swimming around my head lately.

I recall from my undergrad that laughter is a bit of an interesting phenomenon. It’s not so much about “funny” things as it is about communicating “hey, there’s some novelty happening here!”. You could almost imagine it as a little burst of noise we send out, like crows cawing, when we’ve found a new or interesting puzzle or contradiction, and a safe space to explore it.

How this theory came about is interesting, and essentially rooted in the discovery of two different sets of muscles controlling our facial expressions during laughter, signifying a two-step emergence of different forms of laughter.

  1. Duchenne laughter. Came first. Triggered by something funny. Breathy panting. Likely appeared before the emergence of language. From the above article, it was a “signal that things at the moment were OK, that danger was low and basic needs were met, and now was as good a time as any to explore, to play, to socialize.” It said, “this is an opportunity for learning”.

  2. Non-Duchenne laughter. Came second with language. “As people developed cognitively and behaviorally, they learned to mimic the spontaneous behavior of laughter to take advantage of its effects.” But there are subtle “tells”: the eye-muscle movements don’t always come along with this version.

So through this particular lens (which may not be the full story, mind you), laughter is maybe more like a very primitive, pre-linguistic form of communication that we evolved and repeatedly repurposed to our evolving and increasingly social context. It’s been repurposed into other things. But it’s really like an animal sound. It’s like our “moo”.

But importantly our “moo” has been put partially under our cognitive control. And over the course of human social development, we learned to “deploy” our “moo” in a calculated way to better shape our desired social situations, to manipulate our social context based on our desires (e.g., our cultural software), and not just our hardware instincts.

So maybe it’s more like we evolved to wire our animal sound of laughter to our software level, rather than only our hardware level (the latter of which is like “moos” and “caws”).

But further, I’m curious whether there’s also another layer that’s interesting. Take for granted, for a moment, that “seeking and generating synchrony” in the minds of others is something we’re maybe programmed to do as conscious creatures. I imagine it a bit like our subconscious drives are seeking others who will join us in building a meme-based force of united will. Which kinda makes sense if you start to think of the essence of humanity being the language and concepts living in our brains – living within the vessel or spaceship of our biology.

And viewed that way, primitive Duchenne laughter represents a hardcoded way for us to broadcast a physical space (1) where novelty or a new or interesting contradictory idea has been discovered, and (2) that is a safe physical space to explore that idea, to remix it not just within a single mind, but in the shared capacity of a group of minds. It’s a safe space to form a megazord of minds to work together to share some ideas in a space that has been flagged as full of ideas.

This feels like it explains a lot of our experience of laughter – our draw to be near it, to move toward it, to participate in it. That would satisfy all the needs of a purported language organism that lives within our minds.

Oh god, this language-as-organism thing is a deep rabbit-hole, and it strikes me as either 100% real and informative of the world, or totally insane :)

h/t CoTech co-op community for inspiring the Megazord analogy.

2019-10-04 🔗

(Building on some previous thoughts of language as an organism.)

So if language is the organism, then what are its lifecycles? It perhaps becomes interesting to think of language of all sorts as “living”, even when it’s written on a wall. Perhaps it’s best conceived of as a dormant stage of the language organism.

So when words are written on the east wall, and words are written on the west wall, they are dormant. They are concepts and ideas that cannot combine and remix. These ideas cannot fuck. They can’t generate new ideas together, with pieces of one and pieces of another married and intertwined.

But a-ha! So here comes the human mind: the fertile replication machines of the system. It can consume and take in the dormant form of the language organism on the east wall, and the dormant form on the west, and it can bring it into a shared space that recombines and remixes, and finds the best parts of each to shape together and act.

So perhaps the human mind is better conceived as the biological replication machinery of the language organism – the mind is the fertile environment of recombination, not unlike how DNA polymerase is the replication machinery of the cellular environment, or the cellular milieu. And so our minds are like the piece of the giant system moving through the terrestrial milieu – through the thin skin of matter that wraps the earth and drifts with purpose within the viscosity of the atmosphere.

2019-10-01 🔗

Was listening to a wonderful Upstream podcast episode in which Lisi Krall was interviewed. She is just bursting with provocative ideas on ecologies, language, entomology, and the emergence of agriculture. But the ferocity with which she advocates for the importance of the last one really struck me.

It inspired me to get into my usual complexity science way of thinking: What might this event represent as a selected adaptation in the network at one level (human society), and how might it be generalized to events that have happened at other levels (biological)?

Perhaps you have heard of mitochondria. If you remember from high school, mitochondria are the “power plants of the cells”. They produce all the energy that runs the damn show. All complex multicellular organisms have mitochondria. It’s a way-old adaptation.

But there’s a funny thing about how mitochondria came to be hanging out in our cells, making us the energy we need as complex lifeforms. You see, they have their own DNA, separate from ours. And the reason they have that is thought to be because they used to be a separate organism. But somewhere along the line, way back when, a mitochondrion was enveloped by another, larger cell. And the mitochondrion was like “hey, I like these new digs”, and the host cell either didn’t resist, or maybe its little parasite allowed it to do more than it could before, and fare better in the environment than non-parasitized cells without mitochondria.

But anyhow, these two primitive creatures stumbled into destinies that became intertwined. They learned to reproduce together, with the mitochondria never leaving the host.

In some sense, the host cell simply adopted another organism, and started nurturing it to produce energy for itself. Or maybe it was that the parasite found a host that would protect it and allow it to reproduce more than it ever could have on its own. Really both are true, depending on your frame.

But anyhow, maybe you see where this is going. Perhaps agriculture is to human society’s evolution what mitochondria are to the evolution from prokaryotic to eukaryotic life. Each event was a symbiotic envelopment of life-forms into the domain and jurisdiction of another.

Each allowed the host to re-organize information storage/exchange systems it already had, and achieve new scales:

  1. the mitochondrial revolution allowed cells, which already used DNA, to use that DNA in new ways; to begin driving processes that were previously energetically infeasible.

  2. the agricultural revolution allowed humans, which already used language, to use that language in new ways; to begin driving processes that were previously energetically infeasible.

But this raises the question: Is the new order brought on by the revolution itself, or does the revolution simply unlock latent capacity in the information system that was already there?

2019-09-30 🔗

Was thinking about my cousins, and how I don’t get to see them and their kids as much as I’d like. And it got me thinking toward the trade-offs we make in deciding between pursuing the raising of children vs the raising of other pursuits in the world.

Thinking about the careful work of keeping a foot in both worlds – being in touch with adults raising kids, in touch with the kids themselves, and in touch with adults raising ideas. It got me thinking that to be amongst adults raising ideas, these folks perhaps surround themselves with the sorts of memes that can only nourish a small group of people like them. Academics being the deepest example of this, toiling away on perhaps important concepts that are utterly inaccessible to most minds on earth. But at best, people raising ideas figure out how to help those ideas nourish other adults; how to feed them carefully into the existing minds and cultures that are already living and breathing in the world.

But that same creative space and process perhaps represents a milieu that offers no nourishment to a child’s mind. A kid won’t care what great meta concept or world-bending or society-improving thing their parent is doing. If they experience that as a poverty of attention and information that is accessible to them – here and now in the mental space in which they exist – then that’s a failure as a parent. That child might have a harder time growing into a space of abundance that allows them to appreciate what the parent is working on. Or they’ll arrive there with their own baggage and disadvantage.

Anyway, I suppose my realization is that raising kids is its own informational niche; almost like an ecological niche. The mind of the adult and mind of the child are not nourished on the same informational diet. And it’s a place of compromise.

Adults sometimes choose not to straddle the niche of raising children and raising ideas, instead working only on ideas. But these adults sacrifice an important negotiation and compromise that is perhaps part of our deep history. And perhaps this negotiation is very intertwined with what it is to be human. And I wonder if adults who neglect this phase of development (child-rearing), might end up living down a strange path of development, where their minds take a different form than our ancient genes are equipped to successfully navigate.

2019-09-29 🔗

Last fall, I was lucky enough to get the opportunity to do a participatory research fellowship in Taiwan, writing a report about the origins of an incredibly inspiring civic hacker community/network called g0v (pronounced gov-zero). In the time since, I’ve been struggling to piece together a difficult narrative and theory that my time there helped me to start seeing the contours of.

I’m a bit intimidated by how sweeping a hypothesis it feels like I’m leaning toward. Others have expressed that one should be skeptical of such things, and I tend to agree. But despite that wariness, I keep fitting pieces together, wiggling them around, stepping back every so often, but continuing to walk to other sides of the same table.

Some of my hesitancy is that I want to take the time to get my attributions correct. It feels very important to both my own sensibility (and the substance of the evolving hypothesis) that I tread carefully with the knowledge I’m claiming to be seeing, and I’d like to embody theories of feminist citation (h/t DCW) as much as possible as I nurture these ideas amongst others and amongst aspects of myself. Which is to say, I want to attribute as much as possible, not in the spirit of solidifying my authority as a speaker, but in calling attention to all those I’ve learned from, and ensuring that they derive benefit/credit/power/recognition through their experiences and insights moving through me as conduit.

I have a sense that this work might be the culmination of the last few years of tenuous employment and exploration. I feel uniquely privileged to feel some of these connections between topic areas – as a community organizer, technologist, and biochemist with an interest in network science and systems, but also deeply grounded in human-scale concerns like empathy and interdependence (h/t AMB). And it feels like it doesn’t hurt to have a skepticism of overly hierarchical systems and authority-based leadership (which often minimize relational leadership).

The development and explanation of the theory might have to move at a slower pace, but I just wanted to bookmark the scope as a list of topics I’ll be trying to synthesize from: biology, evolutionary genetics, group selection, evolution of sexual reproduction, networks and social science, structural holes, social physics, mathematical properties of small-world networks, the small world effect, Hofstede’s cultural dimensions theory (of leadership), philosophy of open source, emergent strategy, gendered approaches to organizing social networks, the gendered nature of social capital in networks, the Leiden theory of language evolution, and complexity science (h/t AMB and Santa Fe Institute).

If I were to try to summarize my highest hope: It’s about pulling together the foundations of a network-centric and mathematical basis for feminist ideals, which speaks the abstract language of the dominant neurotype in tech, a sector that is madly building the digital underpinnings of a world, and building it not only blindly, but badly.

Disclaimer: I’ve been saying for a few years that I’m in search of a personal (and collective) spiritual foundation that maps to base-reality. Which is to say, a belief system (perhaps simple in practice) that is underpinned by forces that hold true down to the lowest levels of reality. So I’m inclined to seek patterns that might suggest that sort of thing at work :)

2019-07-13 🔗

Inspired by sci-fi author Charlie Stross’ musing on “slow AI”, I often think about the ways that we might be in the midst of slow singularities at different levels of the stack. And lately, I’ve been rolling around an idea about why the conscious experience of being human seems to be so hard to pin down. We talk a lot about how the complexity of human culture is what separates us from other creatures. But what does that mean? How can we dissect that and look at it like an alien might look at it?

And I wonder if we’d do better to understand our experience of being human as what it’s like to be a cyborg – we’re an organism made of partially biological and partially linguistic structures. What if we should think of language more like a separate living creature that has grown in the fertile environment of our nervous systems? What if we could look at human culture through a much more alien lens? Maybe we could just as easily imagine human culture as language (the creature) terraforming our biology as it’s living symbiotically within our minds. And not only our own biology, but also the ecologies and lower levels in which we exist. And in this way, it’s making a place in which it may flourish – our cities, our conversion of biological and mineral resources, etc.

And if this is the case, we can perhaps imagine our current scenario in a different light. We are maybe on this course where our collective culture is taking us to the edge of a precipice. We are essentially converting the systems on which we depend into fuel for this cultural beast, which seems unable to reckon with the reality that it’s disassembling the platform on which it stands. And this reminds me a little bit of a variation of the paperclip maximizer thought experiment. In that version, a poorly calibrated artificial intelligence, in single-minded pursuit of its goals, might convert all available matter into computronium, or programmable matter used for achieving more computation.

Maybe we’re living through a slow singularity. Maybe we are the slow singularity. A biological-memetic singularity. In the same way that you tell someone: “You’re not stuck in the traffic jam; you are the traffic jam.” In this imagining, the memetic creatures of language (which are half of what we are, in a way we perhaps find hard to separate) are disassembling and destabilizing the biological and ecological systems on which they stand, converting all matter to serve the goals of the creatures living in the minds of a single biological host.

It almost sounds like the plot of a sci-fi space opera, but maybe that’s what we’re participating in. We just can’t see how alien we are. But maybe to the rest of the creatures on this earth, we look like some equivalent of an unknowable cyborg, operating based on principles and drives and motivations that they cannot experience or understand or keep up with.

(After starting to work through the concepts around this idea in an amateur way, I came across a niche branch of linguistics called the Leiden theory of language, that has a ton in common. Also, my language likely started to mix with theirs, and so I am indebted to their years of thoughtful work and rigorous academic pursuit.)

2019-06-13 🔗

Working in a high-churn culture like a civic hacknight, where 40% of people might be new each week, there are some interesting opportunities to work on non-exclusive and unloaded language for being critical of dominant systems. Here are some turns of phrase I’ve discovered to be useful:

  • Instead of speaking against capitalism, speak of capitalism’s lack of curiosity toward certain important things. I often say “I try to work on things that capitalism isn’t curious about.” People embedded in capitalist systems find it hard to be critical of it, likely because it’s hard to imagine or hope for anything else. But it’s easy to agree that capitalism does not display an interest in certain things that humans are interested in. And a very human frame for that is curiosity. Being anti-capitalist is a heavy title to wear, but being skeptical of incurious systems is easy. We can talk about it just as we might speak of skepticism toward intellectually or emotionally incurious people who don’t ask questions of peers.
  • Instead of speaking only about “diversity & inclusion”, speak sometimes about generosity of leadership. I like to think of this as creating a positive pressure for leadership, so that it will flow outward. It doesn’t prescribe where that leadership should flow, but if the general environment encourages thoughtfulness around representation, then the folks sharing leadership will inevitably end up on their own personal journey of who they should offer that leadership to. This allows the work to happen more in the headspace of the person doing the work. They will have more ownership of the conclusions drawn. Combined with short rotations of leadership, there’s many surfaces and transition points to learn from. Community projects aren’t companies – you only get to appoint founders once, so that decision bears great importance, but if the person stewarding an initiative changes often, then each change is an opportunity for each group to recalibrate on who that culture values seeing in a leadership position.

2019-05-18 🔗

The complexity video that I linked yesterday talked about the steps that ants follow to make decisions on placing a new nest: info-gathering, evaluation, deliberation, consensus-building, choice, implementation.

Curious whether there are analogies to how people engage in individual-to-individual collaborations (person offering support to person), but also more complex collabs. As in, up through more advanced forms like individual-to-group (person offering support to a group), or even group-to-group collabs (group offering support to another group). This last one is understandably the most complicated way to work together, because knowing self and other as a group is so much harder and less intuitive for our individual sensibilities to navigate.
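
For my own reference, here’s a toy sketch of that ant pipeline in Python, with made-up site qualities and thresholds. It’s loosely quorum-style (scouts gather info by visiting sites, better sites recruit more support, and the colony commits once any site crosses a quorum), not the exact mechanism from the video.

```python
# Toy sketch of a quorum-style collective choice, loosely following the
# info-gathering -> evaluation -> deliberation -> consensus -> choice pipeline.
# Site qualities, quorum, and decay numbers are made up for illustration.
import random

def choose_nest(site_quality, n_scouts=100, quorum=40, rounds=200, seed=1):
    rng = random.Random(seed)
    committed = {site: 0 for site in site_quality}            # scouts currently backing each site
    for _ in range(rounds):
        for _ in range(n_scouts):
            site = rng.choice(list(site_quality))             # info-gathering: visit a random site
            if rng.random() < site_quality[site]:             # evaluation: better sites recruit more often
                committed[site] += 1                          # deliberation: support accumulates
        for site, count in committed.items():
            if count >= quorum:                               # consensus: quorum reached
                return site                                   # choice (implementation would follow)
        committed = {s: max(0, c - 5) for s, c in committed.items()}  # support decays between rounds
    return max(committed, key=committed.get)                  # fallback: best-supported site so far

print(choose_nest({"hollow log": 0.3, "rock crevice": 0.7, "old stump": 0.5}))
```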

2019-05-17 🔗

I’m inclined to see the world through a lens of complexity science, where high-level patterns at one scale (e.g., cell death in biological systems) can perhaps teach us things about patterns that will fare best at other scales (e.g., governance systems in society).

I’ve been slightly disconcerted by the fact that centralization seems to “win” in most every biological system – most every creature that gets to a certain level of complexity also develops a central nervous system. The skin cell of the finger becomes subservient to the brain cell. What might this say about authoritarianism vs democracy?

The optimist in me wants to believe that maybe, if the universe favours centralization, then maybe this could be the force that humanity rallies against. Because we’re good at raging against things. It seems to be part of how humans operate, and work together. And so we often rage against one another. But maybe, we could create a shared understanding that the very fabric and tendencies of the universe are conspiring against some shared democratic ideals. And so then we could just rage against that non-human enemy, the entropic laws of the universe itself. What a perpetual underdog story that would be. It could perhaps occupy our attention and energy for a long time.

But anyhow, today, I realized a bit of a catch in some of my thinking. I often look at the patterns in the world around me, and try to learn from them. So I’d look at how brains (one example of a network highly saturated in activity) seem to have evolved a need for sleep, a period of rest. And then I might wonder what that might teach us about how our contemporary society (another network of nodes that is increasingly saturated with signal) might be begging for similar interventions – periods of rest from the high activity, for re-organization.

Or I might look at how we see the relationship between structure and anarchy in living symbiotic systems that are animal bodies – the hierarchy of the traditionally conceived “host” animal, and the anarchic gut biome that plays such an important role in processing raw components from the wider world. What might that teach us about the role of chaotic spaces adjacent to government? (Hat-tip to MH for a conversation that sparked those particular thoughts.)

I believe that these comparisons aren’t simply analogy, but rather, they’re examples of multiple systems converging on similar outcomes due to the shared properties of networks more generally. Each network operates and is built from different substrates, different components, but there are common underlying dynamics at play. This is what I understand from complexity science.

And this is particularly important in our current situation. We are now running network experiments on global scales. We don’t necessarily have, in our systems, the capacity to run a million iterations of the system. Ecosystems could run millions of cycles among populations of individuals. Bacterial biomes could run many cycles. Some parts of human culture have had the space to run a scant few experiments, as cultures rise and fall. But we’re getting to a place where we must learn from other layers of the network, because we don’t have the space or time to run the experiments to completion at the scale we’ve arrived at: the global scale.

But anyhow, it occurred to me today that perhaps there’s something I should be wary of while looking to learn from the patterns that have “won” at lower levels. Because those patterns are often ones that emerged in networks where hierarchy prevailed. So I should be cautious not to look too much to them for my learning. Because perhaps some of them are patterns of appeasement to hierarchy. Perhaps some of them represent ways to survive in the cracks between towering structures of central control, rather than an imagining of what might be possible if we exercised human-scale democratic values at our level.

2019-05-16 🔗

(Thinking back on the intuitions involved in our conscious experiences) Maybe the evolved trait that is consciousness is more about how we feel the sensation of the ride. Like a surfer’s developed instincts, honed for riding waves. We can carve that wave, and enjoy the simple thrill of navigating it, but the major aspects of the experience – the orientation, the shore as ultimate destination – are forces beyond our control.

2019-05-15 🔗

(Building on a prior thought about what underlying process might be getting hijacked in the “bad” scenarios of “fast” and “slow” AI, ie. conventional general AI and corporations, respectively.)

So what if “synchrony” is the thing that all our human intuitions have evolved to select for? Our laughter, our sense of belonging, our falling in love, our enjoyment of engaged conversations. Perhaps these all increase the synchrony of our collective endeavour of humanity. And in that, a process of converting an “other” into a “self” – something that lights up our mirror neurons, like watching a sports team (your sports team) working together on the other side of the airwaves, or speaking across the couch to a lover, or even across the corpus callosum to a self that’s lucky enough to already be in agreement.

We subjectively feel these human experiences as something more, but perhaps the universe fundamentally understands them only as a collective of matter, increasing synchrony with itself.

Ok, so if this is true, then what are the dangers? How can these intuitions that largely serve us well, that help us to create synchrony with our fellow human actors – how can this be hijacked?

I’m starting to wonder if that is what feels so dangerous about corporations. These are deeply inhumane creatures, operating largely with very different motives. Yes, we’ve encoded these motives in laws at our scale. But what emerges is perhaps above us. We have allowed these things to optimize according to rules that, if operating at a human level, would be understood as psychopathic and anti-human: Pure profit maximization is condoned.

And that’s nothing new, but what is alarming is how these things can start to run with a life of their own, and seem to use humans as “peripherals” of sorts. They deploy people with certain skills and motivations that are perhaps altogether human and sincere. I sometimes imagine employees as some sort of sci-fi puppet on a stick, being thrust into our realm. These people are given autonomy, and they often act in good faith, but they are selected by the larger apparatus for this sincerity in which they operate. Their very authentic interactions can run cover for the very inauthentic mechanisms at the core of corporations – the profit maximization.

And back to synchrony. What I worry about here is that the corporation has a very different synchrony that it is propagating in the world. It has its own patterns and logic, and they are not ours. They’re building something else. But when persons representing these larger interests interact with people, they feel like they’re increasing the synchrony of human processes. They feel like they’re pursuing very human goals, down in the trenches with us. All the participants’ experiences and intuitions might even be telling them “YES, we’re doing something pro-human. We’re becoming more aligned. We’re becoming in sync.” But on the backend, where the power is, there’s another order building.

It’s perhaps a little like how we imagine “fast” AI might go awry. If we were to meet one, we might feel it’s connecting with us on the front end. Its brows might furrow in a way that makes us feel understood. It might respond in a way that we understand to mean it’s feeling for us, becoming aligned with us. But these are just the movements of servos and the motions of human projection. The servos are not evil – they are not good or bad. They are peripherals. They are serving something behind them.

The octopus is an interesting creature. It has a distributed brain that extends into its arms. Its central brain gives loose orders, but the limbs actually do some of the work of knowing what to do. It seems to be a common curiosity, to wonder aloud how alien an octopus consciousness might be. But perhaps we already know. Perhaps we’re living in a system not unlike the octopus’. We are appendages of things above us – we process, and integrate, and make decisions. Maybe we can’t know what it’s like to be the octopus, but maybe our navigation of the corporate entities of capitalism tells us a little about what it’s like to be the octopus’ arms.

2019-05-12 🔗

In conversation with my friend SL ages ago, I had a chance to work through a thought that maybe society is like a board of flickering, disjointed blinky lights, each representing a conscious creature. The acts that we’re engaged in are at their core acts of creating synchrony in that blinking, and holding it as long as possible. So while the board is a proxy, with just one dimension (brightness), we are navigating uncountable other dimensions of potential synchrony. Many of these dimensions are antagonistic. There are countless ways to resolve these systems of equations.
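
The blinky-lights picture maps loosely onto coupled-oscillator models like the Kuramoto model. Here’s a minimal sketch (my own toy, with made-up parameters) of lights that each have their own rhythm, but drift into a shared blink when weakly coupled to one another.

```python
# Minimal Kuramoto-style sketch of blinking lights drifting into synchrony.
# Each "light" is an oscillator with its own natural frequency; a weak coupling
# nudges every phase toward the crowd's average phase. Parameters are illustrative.
import math
import random

def order_parameter(phases):
    """1.0 means the lights blink in perfect sync; values near 0 mean incoherent flicker."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=50, coupling=1.5, dt=0.05, steps=2000, seed=0):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]   # each light starts out of step
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]                # each has its own natural rhythm
    for _ in range(steps):
        mean_sin = sum(math.sin(p) for p in phases) / n
        mean_cos = sum(math.cos(p) for p in phases) / n
        # Mean-field Kuramoto update: each phase is pulled toward the average phase of the board.
        phases = [
            p + dt * (w + coupling * (mean_sin * math.cos(p) - mean_cos * math.sin(p)))
            for p, w in zip(phases, freqs)
        ]
    return order_parameter(phases)

print("with coupling:   ", round(simulate(coupling=1.5), 2))  # drifts toward synchrony (close to 1)
print("without coupling:", round(simulate(coupling=0.0), 2))  # stays incoherent (close to 0)
```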

I feel this view is informed by a TED talk I once watched, Do we see reality as it is?. The speaker opened with a story about a beetle that knew to recognize a shape as a mate. That simplified assumption about how to move in the world held true for a long time. Until an Australian beer company created a bottle that tricked that beetle’s intuitions. Instead of finding mates, it began choosing the dead-end option of fucking the bottle. And it started to go extinct. Its intuitions, which used to serve it well, were now being hijacked as the environment changed. And while this instance was a change outside its control, humans could theoretically do the same thing within their own environments, sending themselves on a more complex dead-end trajectory.

But I’m curious what underlying process is being hijacked. For both the beetle and for us. What common process is being navigated, that our intuitions are highly tuned to optimize for, but that is being thrown out of whack? And I wonder if it’s synchrony itself. I find this supported by recent research that shows that deep conversation (presumably creating a subjective sense of reward in participants) results in synchrony of brain activity under MRI scans.

So what if the thing that we’re moving toward as conscious life (of which “intelligent life” is just a specific subset) is an increase in synchrony with our fellow travellers – the human persons, animals, plants, buildings, internets, and architectures of all sorts? Maybe that’s the thrust of it. A sort of cosmic like-attracts-like of consciousness.

So if that’s the case, then our experience – the things we desire – are just a proxy for the baser need and drive for synchrony. Our senses and intuitions are simply the things we’ve evolved to root out that synchrony. Just like the beetle evolved this attraction to recognize a thing like itself, which implies an underlying synchrony of information and simple concepts within its mind. Perhaps evolved language itself is just our way of seeing other deeper, more nuanced synchrony within the minds of other beings like ourselves.

And these thoughts lead me to the worries. What might we be engaging in that’s like the beetle? What bottle are we fucking? What things are we pursuing away from life, with miscalibrated sensors, seeking synchrony and finding only hollow vessels? I’m still working through this, but I suspect there’s something to be learned about how we can navigate our future “fast AIs” and our contemporary “slow AIs”, the corporate structures we find ourselves navigating amongst as fellow travellers.

2019-04-11 🔗

Thinking about speculative civic sci-fi related to smart cities. Wondering if maybe human culture is best thought of as a probability cloud, not a state machine. Just as the lightbulb was invented in several different places near the same time, a specific future’s arrival is not a singular event, but a potentially inevitable field of moments evoked. And if this consideration of society as a probability space is correct, then it’s less a question of whether these people are doing this good thing or this bad thing, and more a question of whether they can. If they can do the bad thing, then that future is more adjacent.

Or considering it through a social physics lens. Through that lens, we think about possibilities through how many hops away an idea is between people. Maybe we can consider the world we want by how many hops away it might be – good or bad – from a desirable or undesirable possible reality.

Metaphor: We’re perhaps moving through the part of the marble tilt maze where the holes run thick and the steel ball drifts lazily across narrow surfaces in parabolic arcs.

2019-02-16 🔗

(Thinking on “A Brief History of Everything”) Considering emergence and “holons”, things that are both wholes and parts. Maybe commons-based peer production is favoured because it involves small pieces that can be shuffled and restacked in search of emergent properties.

We are matter that encodes descriptions of itself. We rattle the air between us. We are ball lightning.

2019-02-01 🔗

How might super-intelligent descendants of cold-blooded lizards think of or describe windchill? Compared to us, it’d be so much more dangerous – a killer. How would their culture understand it, and talk about it?

2016-07-26 🔗

I want to build a “crazy” that’s so comprehensive that other people can live in it.