![](https://crypto4nerd.com/wp-content/uploads/2023/07/0UpJGgb_M_2NBAgCe-1024x536.png)
Consider this picture of a ship.
If I asked you to recreate this image, how would you do so?
There are two approaches we could consider.
First, we could simulate the entire environment on a computer: create the scene, get the right model of the ship, set up the camera, fill the scene with all sorts of differential equations, then let the simulation play out while overclocking your 3090 and grab the frame at roughly the right time. This is the programmer’s approach, where everything is simulated on a computer and every detail can be spelled out, if the programmer so wishes, down to the particle level.
The second approach is to set up a similar simulation in your backyard, in the real world. You gather some pistons, get some water and the same model ship, maybe film on a stormy day, and move the pistons according to a far less complicated set of rules than the differential equations we used for the computer simulation. Then you can make a similar picture with an off-the-shelf camera.
The objective of both approaches is the same: to obtain a picture of this ship on the sea. The first is precise; the second is noisy and inexact. I’ve recently been thinking about how these two approaches relate to the possible hardware we run artificial intelligence on.
All of modern artificial intelligence runs on computers, specifically on the von Neumann architecture, which was designed as a practical implementation of a Turing-complete system. In a nutshell, everything you want these von Neumann systems to do has to be programmed explicitly. This is what forms the foundation of computers: their ability to do anything that can be expressed as a set of instructions is what makes them so powerful.
I was thinking about how this contrasts with our biological computing mechanism: the brain. In principle, you could encapsulate all the information in a human brain in a computer simulation. If you knew the state of every synapse and every stimulus arriving at the brain at any given moment, you could simulate everything that person would think.
This would, of course, be extremely computationally expensive. Say you model a single neuron with 1000 compartments (which understates their complexity), each with one equation describing the change in membrane potential and three more representing the sodium and potassium channels that govern the flow of ions through the membrane. That gives at least 4000 differential equations to model a single neuron, and even this is a simplification of an actual neuron. Multiply this by roughly 86 billion neurons and you have a system of differential equations that would be intractable even with all the compute power in the world. This doesn’t even account for the interactions between pre- and postsynaptic neurons at each synapse, nor for synaptic plasticity, nor for the glia in the brain, which may themselves play a role in information processing.
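The back-of-envelope arithmetic above can be sketched in a few lines. The compartment, equation, and neuron counts are the illustrative figures from this paragraph, not measured values:

```python
# Back-of-envelope estimate of the brain-simulation cost described above.
# All figures are illustrative assumptions, not measurements.
compartments_per_neuron = 1_000      # simplified multi-compartment model
equations_per_compartment = 4        # membrane potential + Na/K channel gating
neurons_in_brain = 86_000_000_000    # roughly 86 billion neurons

odes_per_neuron = compartments_per_neuron * equations_per_compartment
total_odes = odes_per_neuron * neurons_in_brain

print(f"ODEs per neuron: {odes_per_neuron:,}")    # 4,000
print(f"ODEs for whole brain: {total_odes:,}")    # 344,000,000,000,000
```

And this count is just the system size; each of those equations would also have to be integrated at every time step of the simulation.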
Okay, clearly neurons are complex, and the human brain even more so. This leads me to the first point about an idea I call wetware: a computing system that cannot be separated into software and hardware. In traditional computer systems, hardware and software are treated as very distinct things. In fact, von Neumann systems were designed so that exactly this separation could exist: software can be exported to another piece of hardware, where the same instructions can run on a similar set of inputs to solve the same kinds of problems. But as we can see, the traditional software-hardware setup runs into trouble when attempting to simulate extremely complex objects like the brain. What if we could have a computing system that abandoned this architecture, yet could reach an exponentially larger number of possible states?
Such a system’s behavior would be a product not only of its initial state and its programming, but also of the properties of the system itself, such as the physical or chemical mechanisms that govern it. Relating this back to our ship picture example: the initial state is a still scene with stationary water and ship, the “programming” of the system is the set of pistons that move the water in a certain way, and the properties that govern its behavior alongside its programming are the physics of the air, gravity, and the fluid mechanics of the water and its interactions with the ship. This is a fundamentally different way to get what we want, whether that is a picture that looks like our stormy ship or, in the artificial intelligence case, a completely different kind of system for studying the behavior of the brain.
It is important to note that such a system need not be Turing complete the way general-purpose computers are; it only needs to perform a certain class of computations very well. After all, a system designed specifically to do one type of computation, or to exploit one kind of physical phenomenon, should be much faster at it than a general-purpose computer.
Such a system also has another property: mortal computation. That is, if we were to create a piece of wetware, its state would be mortal. Take the brain, for example. The brain is shaped by genetics as well as by environmental experience. When a person dies, all the information within their brain is lost forever. Similarly, the information in a wetware system should be lost if that piece of wetware is destroyed. This is a consequence of the inseparability of wetware into hardware and software. Since the computation is so specific and the state of the wetware evolves stochastically according to a set of rules, the amount of information required to represent it grows so large that it would be infeasible to export the state of the wetware and save it somewhere else.
Okay, we have a good idea of this definition of wetware, but are there actually implementations of this idea already?
Well, a similar line of thought is what inspired the field of neuromorphic computing: the design of systems that resemble the function of the brain. The field has produced many projects, like IBM’s TrueNorth, Intel’s Loihi, and SpiNNaker from the University of Manchester, all digital neuromorphic systems, as well as Neurogrid, an analog neuromorphic system developed at Stanford. These systems are valuable contributions, but they are not quite wetware, because they don’t utilize the physics of the system itself to do computation. In biological brains, the physical and chemical properties of neurons, glia, and other cells all play a role in computation.
Alternatively, there is evidence to suggest that the brain’s optimization is driven in part by following the path of least resistance in terms of energy consumption. This least-energy principle is what inspired Boltzmann machines: unsupervised learning models that aim to model the data-generating distribution by minimizing an energy function, originally popularized by Geoffrey Hinton in the 1980s.
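To make “minimizing an energy function” concrete, here is a minimal sketch of the Boltzmann machine energy for a toy two-unit network. The weights and biases are made-up numbers for illustration; lower energy corresponds to a more probable state:

```python
# Toy Boltzmann machine energy: E(s) = -sum_{i<j} w_ij s_i s_j - sum_i b_i s_i
# Binary units; lower energy = more probable state under the Boltzmann distribution.

def energy(state, weights, biases):
    e = 0.0
    n = len(state)
    for i in range(n):
        e -= biases[i] * state[i]
        for j in range(i + 1, n):  # symmetric weights, counted once
            e -= weights[i][j] * state[i] * state[j]
    return e

# Two units with an excitatory connection and small positive biases,
# so the joint state (1, 1) should have the lowest energy.
w = [[0.0, 2.0],
     [0.0, 0.0]]   # only the upper triangle is used
b = [0.5, 0.5]

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(s, energy(s, w, b))
```

Training a real Boltzmann machine then means adjusting `w` and `b` so that low-energy states coincide with the patterns in the data.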
I predict that artificial intelligence systems will go one of two paths in the future. We could learn to utilize actual biological neurons, grow them in a lab, and have actual neurons allow us to create artificially intelligent systems according to external stimuli. Or, we could synthesize a new material that would be designed into systems that utilize the physical properties of the system to do computation. Both would be under the framework of wetware.
It is unclear whether this would render modern AI research obsolete. After all, most machine learning revolves around finding a local minimum in a parameter space and hoping it works. The behavior we get from training these systems is emergent; in other words, we know the rules that got them there, but we can’t predict the outcome with certainty.
Take Conway’s Game of Life, for example.
The rules for updating such a system, as in the Game of Life, can be very simple, and we can have perfect knowledge of its entire state, yet we still cannot predict its long-run behavior. Machine learning is largely the same in this respect, and so will wetware be if it becomes widely used to model intelligence. If we subject a system to physical and biological processes to do computation, it’s unclear what kind of behavior it will produce. And if such a system is used to model intelligence instead of conventional computers, then the emergent behavior we’ve observed in machine learning models might not translate to wetware systems at all.
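A minimal sketch of the Game of Life update rule makes the point concrete: the rules fit in a few lines, yet predicting long-run behavior generally requires actually running the system. The grid and pattern below are just a toy example (a period-2 “blinker” on a small wrapping grid):

```python
# Conway's Game of Life: a cell survives with 2 or 3 live neighbors,
# and a dead cell becomes alive with exactly 3. Edges wrap around (toroidal grid).

def step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]

after_one = step(blinker)    # the horizontal bar flips to vertical
after_two = step(after_one)  # and back again: a period-2 oscillator
print(after_two == blinker)  # True
```

The blinker is one of the few patterns whose future we can state in advance; for an arbitrary starting grid, no shortcut is known that beats simulating step by step, which is the sense in which the system’s behavior is emergent.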
The term wetware itself is not new; it is already defined as computer technology in which the brain is linked to artificial systems, or used as a model for artificial systems based on biochemical processes (i.e. the brain). I only want to provide a more concrete framework around the word and offer some speculation about the potential development of wetware systems, as well as the possible ways wetware could be developed to further artificial intelligence.