AI Will Enable the Solving of Superhuman Problems
... through Feedforward Intelligence Tools (FFITs)
Summary
AI (artificial intelligence) is all over the news lately. This is in large part due to last November’s public release of ChatGPT. Though there’s been a lot of coverage, little of it has been optimistic in tone, instead tending toward FUD (Fear, Uncertainty, and Doubt). Understandably, many see AI as a dangerous Pandora’s box that maybe should have been kept closed. But raising the lid has forced the public to start coming to terms with serious questions that our technology-dependent society needs to face, sooner rather than later. The sudden sense of urgency has already brought AI far more worldwide attention than other serious issues ever get. That’s probably because the perceived threat of super-intelligent robots taking over the planet is the perfect headline grabber. As a result, it’s been a powerful stimulus for much-needed public education and discussion, even at the highest levels. Though the killer robot scenario doesn’t seem imminent, it would be a mistake not to take AI’s potential dangers seriously.
However, ignoring the incredible potential benefits would be a huge mistake as well. In this post, I’ll make the point that as AI rapidly evolves, easily accessible Feedforward Intelligence Tools (FFITs) will become their own industry and augment our thinking skills in much the same way that smartphone apps have transformed our information-gathering skills. I’ll build up to that in the form of an AI primer, explaining basic AI concepts in terms of parallels with their biological underpinnings. You’ll see a lot of links for further exploration if you’re really interested.
I hope you’ll find this interesting and informative. Hopefully it will also help in seeing AI through a rational lens. Likes, Shares, and Comments are always appreciated to keep these posts coming!
“Watch the Front Wheels!”
The graphic I put together for my Mumbling Old Man welcome page is a chronological collage of some of the biggest technological influences in my life. Second down from the top left is a picture of my father and his prized 1948 Hillman Minx Mark I Saloon when he bought it in 1954 — the first car on our block in Darlington, UK! We emigrated to the USA in 1958, and I got my own license in 1964. I remember his congratulations and his #1 driving advice, which has served me well for the decades since then:
Always be watching the front wheels of the car next to you. This is the best way to predict what it will do next so you can get out of the way fast and save your life!
Little did he know then that cars of 60 years later (notably my Tesla Model 3) would be able to do this kind of defensive driving automatically! Not only would they be watching the wheels of cars, but they would be predicting the movement of many surrounding cars and performing automatic collision avoidance if necessary. I relate this story as background for introducing the terms feedback and feedforward. These concepts exist in similar ways in both behavioral science and AI. If you have a feel for them, grasping the fundamentals of AI and understanding its possible benefits is far easier.
At the beginning — simple biological feedback
Think of feedback as sensory information about the quality of something you just did or may still be doing. You can use this feedback to improve quality next time or even while you’re currently doing it. When my father drove his Hillman Minx, keeping it centered in the lane was basically a simple, continuous feedback process. As his eyes and brain judged his position in the lane, he’d nudge the steering wheel left if too far to the right and right if too far to the left. This simple, continuous biological feedback loop kept his Hillman Minx centered.
I’m sure that the first moment he ever drove, he had little sense of just how much to adjust the steering in order to stay smoothly in the center. It was probably jerky and erratic because the feedback from his eyes and the processing by his brain were at a basic, untrained, reflexive level. There were also other competing feedback loops at work to complicate things. If the Hillman Minx was moving too slowly or had stopped, he’d press the accelerator; if it was moving too fast, he’d ease off. If it had far too much speed, he’d slam on the brakes.
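A simple feedback loop like this lane centering is easy to sketch in code. This is a toy illustration only; the function name, gain, and starting error below are made-up values, not anything from a real vehicle controller.

```python
# A minimal sketch of the lane-centering feedback loop described above.
# All names and numbers are illustrative.

def feedback_steering(lane_error, gain=0.5):
    """Nudge the wheel opposite to the current lane-position error."""
    return -gain * lane_error  # too far right (+) -> steer left (-)

# Simulate a few correction cycles: each step, the lane error shrinks
# as the driver keeps nudging the wheel back toward center.
error = 1.0  # start 1 unit right of center
for _ in range(10):
    error += feedback_steering(error)
print(round(error, 4))  # error has nearly vanished
```

Note that the loop reacts only to the *current* error, with no memory of past attempts and no prediction; that's exactly what makes it feedback rather than feedforward.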
From biological feedback — to biological feedforward
Like everyone who first gets behind a wheel, my father’s first attempts were jerky. But quickly and subconsciously, his brain started adding memory of past attempts to predict just how much the steering wheel and accelerator should be moved, resulting in very smooth driving. To put it scientifically, the neuronal networks in his brain had started to add biological feedforward processing to the feedback to improve the smoothness.
So what is feedforward, as compared to feedback? According to Wikipedia,
Feedforward (Behavior and Cognitive Science) is a method of teaching and learning that illustrates or indicates a desired future behavior or path to a goal. Feedforward provides information, images, etc. exclusively about what one could do right in the future, often in contrast to what one has done in the past. The feedforward method of teaching and learning is in contrast to its opposite, feedback, concerning human behavior because it focuses on learning in the future, whereas feedback uses information from a past event to provide reflection and the basis for behaving and thinking differently.
To put this in simpler language, a feedback process learns from a past event, and a feedforward process learns to achieve a future goal or behavior by mimicking target examples. Going back to my father’s driving example, simple feedback corrected lane centering error, but in response to previous results. At first, his brain’s focus was more about the immediate position of the car in the lane, with little devoted to what was ahead on the road or what other cars may intend to do. Later, achieving ultimate driving smoothness required more brainpower in the form of neural network processing: longer-range vision, much more pattern recognition by the brain, and predictive trajectory planning for upcoming curves and behavior of other cars. This planning came from memory of previous experiences. Feedforward refers to this predictive planning process at the neuronal level.
How exactly does this feedforward process learn from memorized examples and translate them to steering smoothness? That’s not totally clear, even to neuroscientists. What we do know is that neurons, the primary functional units of the brain and nervous system, communicate with one another throughout the body, enabling feelings, thoughts, and motion. They receive sensory input from the outside world, send motor commands to muscles, and relay electrochemical signals to other neurons. A neuron can receive signals from many neurons through its dendrites (inputs) and send out commands to one or more neurons through its axon and branches (outputs).
A neuron stimulates other neurons by releasing a chemical (a neurotransmitter, such as acetylcholine) at the end of its axon (output). The release is triggered by electrical activity (electrochemical processing) in its body. Whether or not a neuron triggers depends on the amount of stimulation from all its dendrites (inputs) plus the neuron’s internal electrochemical processing algorithm. Though we’re far from understanding exactly how this all works, we can roughly model it as a summing of inputs (chemicals at dendrites) of different weights (chemical amounts) plus other actions (processing) that results in sending a chemical signal to other neurons if an activation threshold is reached.
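This rough model of a neuron translates almost directly into code. The sketch below is a deliberate oversimplification, and the function name, weights, and threshold are made-up illustrative values.

```python
# A rough sketch of the neuron model described above: weighted inputs
# are summed and compared to an activation threshold. All numbers are
# illustrative only.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three dendrite inputs with different chemical "weights"
print(neuron_fires([1.0, 0.5, 0.2], [0.9, 0.8, 0.1]))  # 0.9 + 0.4 + 0.02 = 1.32, fires
```

Adjusting those weights until the output is acceptable is, in essence, what both biological and artificial learning do.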
More than likely, biological feedforward learning is possible because the brain can remember the changing patterns (and patterns of patterns!) of chemical amounts and proportions that are needed to perform a given mental or physical task. If a pattern of weights results in a mental or physical action that’s unsatisfactory, the brain somehow adjusts the chemical amounts (weights) until the result is acceptable. With time, the best weight pattern is probably triggered as soon as a previously learned scenario is detected by the senses, with little active thinking. As pattern recognition and learning get more refined, this feedforward learning begins to add a predictive quality to thinking and behavior. This is probably because memorized scenarios and patterns are more like movies than static images.
From biological neural processes — to feedback automation — to feedforward AI
Now let’s transition to artificial intelligence. As it turns out, biological feedback and feedforward have somewhat similar parallels in automation and AI. This is no surprise, since human physiology has served as a model for technological innovation throughout history. Before going further, let’s define exactly what’s meant by each term, since they’re often equated or confused. IBM, one of the early pioneers of computer-based intelligence technology, offers some of the clearest definitions:
[Automation is] the application of technology, programs, robotics or processes to achieve outcomes with minimal human input.
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
So in the broadest sense, automation is a subset or type of AI, but its mimicking of human thinking is extremely simplistic or nonexistent. AI mimics human thinking at a far higher, problem solving level. I like renaming the terms to feedback automation and feedforward AI to make the distinction clearer. Here’s some further discussion of each.
Automation relies on one or more sensors (electronic or mechanical) to monitor some particular aspect of a process, so that it can be forced toward a desired value. Think of a simple thermostat controlling a simple furnace that can be either on or off. It’s winter, so you set the thermostat to 70°F. A temperature sensor in the thermostat sends a reading of 65°F (feedback) to the electronics. Because it’s less than 70°F, the thermostat switches the furnace on. Eventually, the sensor reports greater than 70°F, and the thermostat switches the furnace off. The cycle repeats indefinitely, or until the thermostat settings are changed.
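The thermostat's decision logic is about as simple as feedback automation gets, which a few lines of code make clear. The names and temperatures below are illustrative only.

```python
# A minimal sketch of the thermostat feedback loop described above.
# Names and temperatures are illustrative.

def thermostat(reading, setpoint=70.0):
    """Return True (furnace on) if the room is below the setpoint."""
    return reading < setpoint

print(thermostat(65.0))  # True  -> furnace switches on
print(thermostat(71.0))  # False -> furnace switches off
```

There is no memory, no prediction, and no learning here: each decision depends only on the latest sensor reading, which is the hallmark of feedback automation.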
To make the parallel with human biology, the temperature sensor crudely mimics the human sense of “I’m too cold!” (feedback) and the thermostat/furnace mimics lighting a fire (behavior). Up until a couple of decades ago, this kind of feedback automation, crudely simulating human senses and behavior, more or less defined automated technology. Automatic elevators sensed floors with simple switches, opened/closed doors with timers, and moved up or down when conditions were right. Refrigerators turned their coolants on and off depending on temperature sensors. Cars began implementing cruise control, using speed sensors that sent simple feedback to accelerators and brakes. All this was simple feedback automation.
Though the idea of mimicking the brain’s neural networks to achieve artificial intelligence had been introduced by McCulloch and Pitts in 1943, existing technology limited the speed of development to a snail’s pace for many decades. On the other hand, the use of feedback automation exploded and became increasingly sophisticated, especially after the birth of the microprocessor in 1971. As microprocessors became smaller and more powerful yet cheaper (Moore’s Law), they found their way into every facet of human life, adding advanced control capability to simple feedback. But the fundamental concept remained feedback automation.
The microprocessor did, however, trigger an avalanche of hardware, software, and memory technology development, which gave birth to the Information Age in the mid-1980s. In 1983, the ARPANET’s adoption of the TCP/IP protocol made large-scale networking a reality, effectively giving birth to the internet. In 1989, the birth of the World Wide Web put the Information Age into full gear. Then, development of intuitive, graphics-based browser software like Mosaic and Netscape, coupled with affordable computer hardware, finally gave billions of people the tools they needed to interact and share information like never before.
Then in the early 2000s came smartphones, smartphone cameras, and social media software. Microprocessors and memory chips continued to become tinier and more powerful; costs continued to plunge. Digitization of the world’s text and image information exploded in intensity as it started to become instantly shareable through smartphones. The information tidal wave first predicted by Bill Gates in 1995 has continued to this day, and it shows no signs of slowing.
As a result, conditions have finally become ripe for rapid development of feedforward AI. Remember our discussion of biological feedforward and human learning? It depends on the existence of lots of examples of acceptable (and unacceptable) knowledge and behavior to model toward. Artificial learning through feedforward AI is similar, except that necessary examples (samples) for the model are digitally stored in computer memory instead of the brain. To roughly simulate dendrites and axons of human neural networks, feedforward AI arranges large arrays of digital inputs, multiplies each by adjustable weight controls, sums the results, and applies activation functions. The resulting outputs are similar to axons and provide inputs to succeeding layers. Layers are arranged and interconnected as needed to accomplish the purposes of the AI system. The outputs of the final layer are probability predictions and control decisions that have fed forward from the inputs and weights. Hence, feedforward AI.
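The layered multiply-sum-activate arrangement described above can be sketched as a toy forward pass. Everything here is illustrative: the sigmoid activation is one common choice among several, and the weights are arbitrary numbers that a real system would instead learn from data.

```python
# A toy forward pass matching the description above: each layer multiplies
# inputs by weights, sums, and applies an activation function. Weights and
# layer sizes are arbitrary for illustration.
import math

def activation(x):
    return 1.0 / (1.0 + math.exp(-x))  # sigmoid squashes any sum into (0, 1)

def layer(inputs, weight_rows):
    # One output neuron per row of weights: weighted sum, then activation
    return [activation(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

def feedforward(inputs, layers):
    for weight_rows in layers:
        inputs = layer(inputs, weight_rows)  # outputs feed the next layer
    return inputs

# Two inputs -> hidden layer of two neurons -> one output "probability"
net = [
    [[0.5, -0.2], [0.8, 0.3]],  # hidden layer weights
    [[1.0, -1.0]],              # output layer weights
]
print(feedforward([1.0, 0.0], net))
```

The data flows strictly forward, from inputs through weights to outputs, which is exactly why this architecture is called feedforward.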
Learning occurs as the feedforward AI system is fed tons of samples of real world data in order to expose it to as many real world variations as possible. As new data samples (inputs and weights) are processed, the resulting output is compared to the desired outcome, and weights are adjusted to minimize the error. This is done by a mathematical method called backpropagation. After enough data samples, stabilized weights represent the most probable results or actions for an input data set.
To make all this really easy to remember, think of the process of refining a cake recipe for your bakery. Using the original recipe (weights), you combine the ingredients (inputs). After it’s been in the oven, you taste it (output). Based on how it tastes (error), you adjust the ingredient quantities (weights, backpropagation) and bake another one (new sample) to see how it tastes (error). You repeat this many times until you have the perfect cake, which is now defined by the latest version of the recipe (weights).
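The taste-and-adjust loop maps onto the simplest possible training sketch: guess with the current weight, measure the error against the desired output, and nudge the weight to shrink it. The data, learning rate, and single-weight model below are made up purely for illustration; real backpropagation performs this adjustment across millions of weights spread over many layers.

```python
# A minimal sketch of the learning loop described above. The "recipe" is a
# single weight, and each pass nudges it to reduce the tasting error.
# All numbers are illustrative.

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs with desired outputs
weight = 0.0                                    # initial recipe: a bad guess

for _ in range(200):                 # many baking rounds
    for x, desired in samples:
        output = weight * x          # bake and taste the cake
        error = output - desired     # how far off is it?
        weight -= 0.05 * error * x   # adjust the recipe to shrink the error

print(round(weight, 3))  # converges toward 2.0, the "perfect recipe"
```

After enough rounds, the weight stabilizes at the value that best reproduces the desired outputs, just as the passage describes.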
You can imagine that the number of training data samples that would be necessary in order to teach a feedforward AI system human knowledge and behavior would be unfathomably large. And, the analogy with biological neural networks is extremely loose, with biological neurons orders of magnitude more sophisticated and adaptable than anything we can create artificially. Simulating the human brain’s intelligence in all subjects and scenarios will probably be impossible for the foreseeable future. But as demonstrated by ChatGPT since November 2022, it’s already possible to blur the line between human and machine thinking behavior for quite a few text-based subjects!
From feedforward AI — to feedforward intelligence tools (FFITs)
The beauty of feedforward AI is that once the giant learning system learns the ideal weight values for particular tasks and scenarios, that set of weights can be put to work in a much smaller target feedforward AI system. The target system can also achieve better results with far fewer lines of computer code than feedback automation ones with conventional code.
Tesla’s upcoming iteration of FSD (Full Self-Driving) will be an excellent example of this. Up until now, much of FSD has been feedforward AI, trained on video data gathered during the hundreds of millions of miles driven with it to date. But a great deal of it still depends on feedback automation logic like “If the traffic light is red, apply the brakes. If the light turns green, press the accelerator.” Recently, Elon Musk demonstrated FSD V12, a version using feedforward AI exclusively, during a Model 3 drive. He said the noticeably smoother operation was achieved even though 300,000 lines of code had been eliminated!
Feedforward AI development has been so rapid that it’s already taken many of the most popular computer applications to the next level, especially ones involving text or image pattern recognition:
Google Search — for better search result ranking and understanding user queries
Google Assistant, Alexa (Amazon) and Siri (Apple) — for natural language processing, speech recognition, and executing user commands
Facebook — for facial recognition in photos, content recommendation in the news feed, and targeted advertising
Instagram (owned by Facebook) — for content recommendation and filtering inappropriate content
X (formerly Twitter) — for content recommendation, sentiment analysis, and filtering out spam or malicious content
Google Photos — for image recognition, categorization, and search
YouTube (owned by Google) — for video recommendations, understanding video content, and auto-captioning
Amazon — for product recommendations based on user behavior, search query understanding, and fraud detection
Netflix — for content recommendation based on user viewing history
Spotify — for music recommendation and playlist generation
Tesla Autopilot and FSD Beta — for computer vision processing, to identify and track objects around the vehicle and to plan and execute autonomous driving and navigation
Google Translate — for providing translations between multiple languages
ChatGPT — for text command recognition, information retrieval, and generation of human-like text
Much of our existing feedback automation will also undergo a similar conversion to feedforward AI. The paradigm shift to AI will be gigantic, and it will give us new data analysis and control tools of amazing power. The speed of this transition will be limited only by the speed, power and cost of the supercomputers needed to train the feedforward AI.
I predict that creation and distribution of these new feedforward AI intelligence tools will become its own industry, similar to what’s happened with smartphone apps. Their relatively simple coding requirements, and the convenience of distributing intelligence as patterns of weights, will make it possible to breathe feedforward AI into a whole new range of real-world devices, software applications, and thought processes in general. I’m calling these new tools FFITs (feedforward intelligence tools). Examples of this idea can already be seen in ChatGPT’s Plugin Store, where ChatGPT’s intelligence can be customized to different areas of expertise.
From FFITs to solving superhuman problems
The explosion of digital technology and information in the last few decades has been a boon to humanity. Anyone with a desktop computer or smartphone can rapidly access almost any kind of knowledge upon demand. This has had a huge effect on people’s lives in countless ways. Think about how easy it is to access personal information like bank accounts, medical records, investments, calendars, and photos. Think about how easy it is to book flights, make room reservations, and get a ride to the airport. These all used to be incredibly time intensive and inconvenient. In 1980, I had to sneak out of work at MGH in Boston in order to stand in line at the bank to cash a check. And to get money on a long road trip, I had to get a Sunoco Club membership that gave me check cashing privileges at their gas stations. Things certainly have changed!
But this sheer volume of information has brought its own problems — like analysis paralysis, difficulty separating truth from fiction, social network effects on society, and intrusion of bots acting as humans (to name a few). These always existed to some extent, but the information revolution has magnified them to the point where we feel helpless in trying to prevent them. Our brains are amazing, but they have limitations. They’re weak at detecting patterns and estimating probabilities in large sets of information, slow in their logical thinking, and easily fatigued. They have slow and inconsistent memory retrieval. And, they’re weak at removing subjective bias even when it’s really important. To move forward effectively in this sea of data, our brains and thinking will need help.
Just as our mental limitations are coming to light, we find ourselves confronted with giant existential problems. We struggle with how to prevent human-accelerated global warming, improve global health, defend against pandemics and cyber threats, fight economic inequality, and control AI itself. Up until now, we’ve attempted solutions with fairly simple, reactive strategies. For example, to reduce carbon emissions, governments have established taxes and cap-and-trade systems to incentivize businesses. To absorb CO2 from the atmosphere, trees have been planted in reforestation efforts. To quash pandemics, vaccines have been rapidly developed. To reduce inflation, federal interest rates have been raised.
Though effective to varying degrees, these strategies have been implemented in response to not only science, but to pressures of many different social forces, both good and bad. And generally, they’ve been based on relatively small sets of data. They may have been good enough strategies to get started, but without more sophisticated ones that can analyze far greater amounts of data and perceive complex patterns, success will be far from guaranteed. The simple feedback strategies of the past will need to be replaced with feedforward AI that can peek further into the future and make more intelligent decisions.
We happen to be at a great moment in history! In the middle of analysis paralysis and growing self-doubt about our mental capabilities to solve giant problems, feedforward AI has burst on the scene, giving us a great deal of hope. As the FFIT industry rapidly develops, these new intelligence tools will give us problem-solving capability orders of magnitude better than the limited intelligence we have today. At some point, they will start making a difference in how these superhuman problems are tackled, turning hope into reality.
— Come to think of it, “Watch the front wheels!” was, in fact, an FFIT. Thanks, Dad!