Cybernetic systems arrived for the masses with fanfare. The UNIVAC (UNIVersal Automatic Computer) gained widespread attention during the 1952 U.S. presidential election, when CBS used it to predict the outcome of the race between Dwight D. Eisenhower and Adlai Stevenson. Initial skepticism about its prediction of a landslide Eisenhower victory led the network to adjust the figure to something more modest. However, the computer turned out to be correct.
This led to a surge in public awareness, interest, and investment in the nascent industry. The initial framework and ideas behind this "artificial intelligence" led to the famous Dartmouth Conference of 1956, where many of the early leaders in the field, including Claude Shannon, the father of information theory, declared that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." This bold assertion fueled excitement in both academic and commercial sectors.
The intelligence at work during the early cybernetic days was tied to a construct or mechanism that was teleological (it had a purpose or goal). The goal was the desired end state of the machine. The machine would use inputs to generate outputs; these outputs would then feed back to (communicate with) the inputs, adjusting accordingly, until the machine reached its desired end state. Think of a thermostat, where the delta between the current temperature and the desired temperature guides the input/output mechanism. Almost 70 years later, some of the predictions and promises of artificial intelligence are starting to percolate rapidly across the world of business.
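To make that loop concrete, here is a minimal sketch of the thermostat-style feedback mechanism in Python. The function name, gain value, and temperatures are illustrative assumptions, not drawn from any real controller; the point is simply that each output feeds back to shrink the gap between the current state and the goal.

```python
# A minimal sketch of the cybernetic feedback loop described above.
# The system compares its current state to a goal (the thermostat's set point)
# and feeds the resulting error back into its next action.
# All names and numbers are illustrative, not from any real device.

def thermostat_step(current_temp: float, target_temp: float, gain: float = 0.5) -> float:
    """Return a heating/cooling adjustment proportional to the goal delta."""
    error = target_temp - current_temp   # the delta that guides behavior
    return gain * error                  # output that feeds back into the system

temp = 15.0    # starting room temperature
target = 21.0  # the desired end state
for _ in range(10):
    temp += thermostat_step(temp, target)  # each output adjusts the next input
print(f"final temperature: {temp:.2f}")
```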
During those 70 years, many innovations have occurred, laying the foundation for the powerful development of modern machine learning. Many false starts and dead ends have also occurred. Rich Sutton, in a widely circulated essay, "The Bitter Lesson," describes some of these bitter lessons learned over the years as engineers strove to create artificially intelligent capabilities.
We now find ourselves in a very interesting time. What will be the impacts of artificial intelligence? How much of the current buzz is hype and how much is real revolution? Frankly, nobody knows. There are many examples of over-investment in technologies that failed to deliver the promised goods. Conversely, there are also many examples where the investment in nascent technologies was appropriate, if not undershot.
One of the interesting things I’m seeing is not tied to the technology side but to the human use side. I regularly spend time ruminating about how we think, decide, and act. When a new technology or capability confronts us, we are good at incremental forecasting but poor at transformational forecasting. We struggle with replacing a process, method, or workflow. Usually, we will try to make it better - faster, more automated, and so on - rather than rethink how the proverbial sausage is made. It is truly difficult to see and overcome this way of thinking. The whole “optimization” paradigm is a powerful force that, in its modern form, has legacy roots as far back as W. Edwards Deming and has produced incredible business methodologies tied to Total Quality Management, Six Sigma, et al.
What’s more, the mental model by which we act is even more difficult to dislodge. We adopt verbiage, jargon, categories, and taxonomies that influence the way we think about things. This is a double-edged sword. On one side, categorical schemes help us rationalize the world and provide us with the tools to organize effectively and simplify work. On the other side, these tools act as straitjackets limiting our mental dexterity, influencing the way we compare, relate, and communicate - ultimately leading to a limited imagination. This is one of the reasons why startups can quickly seize market share from incumbents. Founders, sometimes naively, attempt to solve a problem in a way that nobody has thought of before or had the gumption to pursue. For many “inexperienced” entrepreneurs, their naive optimism is a feature, not a bug, for they have yet to be fossilized into a one-dimensional mode of operating.
What can we do to allow our minds to be more fluid, i.e., help us think differently? Modern neuroscience and artificial intelligence teach us that one of the best approaches is to continuously seek novelty. Novelty has its own feedback mechanisms and yields several cognitive improvements:
Novelty activates the midbrain and increases dopamine levels, motivating us to explore
Novel experiences tend to create stronger, more vivid memories
Novelty can enhance attention
AI research has also discovered the value of novelty for machine learning, and how it can lead to innovation. In their ground-breaking, now cult-classic book “Why Greatness Cannot Be Planned” and the original Evolutionary Computation paper “Abandoning Objectives,” Ken Stanley and Joel Lehman describe how they stumbled upon serendipitous outcomes by using novelty as a heuristic in machine learning. Succinctly, their research supports the statement that truly innovative outcomes are achieved more regularly through a novel search/discovery process than through a deliberately planned one, which harkens back to Rich Sutton’s “The Bitter Lesson.”
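For the curious, below is a toy sketch, in Python, of novelty used as a selection heuristic, loosely in the spirit of Stanley and Lehman's novelty search. It is not their actual implementation; the behavior representation, distance measure, and population sizes are made-up assumptions. The idea it illustrates is that candidates are kept not because they score well against an objective, but because they behave differently from anything in the archive of behaviors seen so far.

```python
# A toy illustration of novelty as a search heuristic (not Stanley and
# Lehman's implementation). Candidates are ranked by how different their
# "behavior" is from behaviors already recorded in an archive, rather than
# by fitness against a fixed objective.
import random

def novelty(behavior, archive, k=5):
    """Average distance to the k nearest behaviors already in the archive."""
    if not archive:
        return float("inf")   # everything is novel when nothing has been seen
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

archive = []                                            # behaviors discovered so far
population = [random.uniform(0, 1) for _ in range(20)]  # illustrative 1-D behaviors

for generation in range(50):
    # select the most novel candidates rather than the "best" ones
    ranked = sorted(population, key=lambda b: novelty(b, archive), reverse=True)
    survivors = ranked[:5]
    archive.extend(survivors)
    # mutate survivors to form the next generation
    population = [b + random.gauss(0, 0.1) for b in survivors for _ in range(4)]

print(f"distinct behaviors explored: {len(archive)}")
```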
One of the models of thought I use is to think in terms of building our brains into a dynamic database with a high degree of variability/diversity...remember the 3 V’s of Big Data - volume, variety, and velocity? Below is a short list of how I exercise novelty in my world to sharpen the saw; perhaps some of these will be useful to you:
Read obscure books (not the bestseller lists); I’m reminded of George Patton’s quip, “If everyone is thinking the same, then somebody’s not thinking!”
Learn about new industries, they’re all different and they’re all the same ;)
Travel to new destinations, or at least take a different route to common destinations.
Try to regroup things - for example, if you are reviewing a presentation that describes 4 distinct categories, attempt to regroup them into 2 categories that make sense. Michel Foucault, the famous French philosopher, tried this at scale and across human history….fun stuff! See “The Order of Things”
Create your own categorical schemes. “I must create a system, or be enslaved by another man’s.” - William Blake
Bend time - force yourself to complete something in a ridiculously short amount of time, e.g., if the presentation is due in two weeks, finish it in two hours. This doesn’t mean you use it as your final product; just flex the speed muscle.
Interrogate the data that you don’t see. It’s easy to ask questions and make conclusions about the data that you see. It’s harder to conjure up data that you don’t have that might add a twist to current assumptions and conclusions.
The call to action is simply to think bigger, think differently, and use novelty as a heuristic to shake off the straitjacket that may be limiting your imaginative output.
Angel Armendariz
Serendipity Engineer