Monday, January 25, 2016

From Turing to Musk to Industry 4.0

We are on the cusp of major changes in education, technology and industry owing to forces that we are powerless to overcome.  It’s natural to fear change on a large scale.  But, we shouldn’t be afraid.  We should embrace the change. 

I’m talking about the impact of artificial intelligence (AI) on society and the workplace. Boston Consulting Group (BCG) has labeled the coming paradigm Industry 4.0.  In its studies, nine technologies from the Industrial Internet of Things (IIoT) to Big Data are integrated to change not just the factory floor but also how businesses will change their product and service offerings.

Our challenge is to understand how new technology will change the workplace and how we should educate our children (See “Is the Education We Want the Education We Need?” and “Don’t Send Your Kids to College”).  Is it a little scary?  Yes, it is.  Some fear not only that robots will take our jobs but that they might take over society.

Big thinkers from Alan Turing to Elon Musk have contemplated technological singularity for decades.  What’s that?  Here’s a simple definition from Wikipedia:

“The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself…”

Many think machines will never outsmart humans.  After all, how can a machine know more than its human programmers?  Rather than debate how we might determine if we have reached singularity, Turing designed a test to take the guesswork out of the evaluation.

The Star Trek series showed us how it might happen as the impatient-with-human-failings Mr. Spock was replaced in The Next Generation by Data, an android with access to all the information in the universe and the irrefutable logic to apply it in any situation. 

The real-life manifestations of this sci-fi are everywhere.  But not in robot form.  Fitbit collects data and coaches us to be healthier.  Netflix collects our movie preferences and recommends films for us to watch.  Next we will see self-driving cars that use sensors and software to get us to work more safely than we can on our own.

How will Industry 4.0 change businesses and, by extension, jobs?  Futurists contemplate that product offerings will evolve into services.  Everything-as-a-Service (XaaS) describes how it would look. General Motors recently invested $500M in Lyft, a ride-sharing company.  Why? Well, instead of buying a car in the future, you might buy transportation services. 

Does that seem far-fetched?  Well, when ride-sharing services evolve into fleets of on-demand self-driving vehicles to take you where you want to go when you want to go there, you won’t much care if you own a car.  And, further, you won’t care if the self-driving car that picks you up is a Chevy, Ford or Toyota. 

Don’t believe me?  What kind of plane transported you on your last business trip or vacation?  You might remember the airline and whether you had a good experience with their service.  But, it’s unlikely that you remember whether Boeing or Airbus made the plane.  Nor, do you care!

So, what happens to your job in this new world?  BCG concludes that government and industry should work together to “[a]dapt school curricula, training, and university programs and strengthen entrepreneurial approaches to increase the IT-related skills and innovation abilities of the workforce.” (I speculated about this need in “Our Future: Educated People or Just Educated Robots?”)

In other words, we need to change the way we educate our children.  Workers of the future will be required to understand how to interpret data as well as turn a wrench.  Low-cost labor will no longer be a global competitive advantage.  The spoils will go to those who can provide a high-skilled workforce. 

Politicians on the left and right seek to make us afraid of the foundational changes necessary to support this paradigm shift (free trade agreements, common core, charter schools, immigration, automation of the factory floor).  After all, if you want to get elected, tell people what to be afraid of and who to blame.

But, it’s fair to say that technology has consistently improved our lives for centuries – from steam engines to electrification to car and air travel to modern electronics.  Rather than raising the specter of job losses from automation and free trade agreements, our political and thought leaders should be creating a vision of a future of better schools, better jobs and better lifestyles.



  1. Perfectly stated, John. Who will lead in educating the masses as to the opportunities indeed! It would be interesting to see the ethics debates of collateral impact. For example, biological and DNA reprogramming, inclusion of humans with diminished capacity of any kind and the societal integration of these changes from a psycho-social perspective. How will society accommodate the integration, particularly across all age groups? I for one am all for the progress and have less fear of AI out of control. I have more of a concern about humans whose priority and proclivity is to gain or maintain power by using AI to the disadvantage of the less positioned. Thanks again.

    1. I don't think that humans will use AI to limit those less positioned. I think that those less positioned will fall further behind.

    2. My research, which Mensa members are welcome to visit my meeting in Chiswick on Thursday 11th February to debate, describes the 'minion' DNA-protein complex serving as the 'chip in the brain'. AI modeled on it would satisfy Turing.
      Michael T Deans

  2. This is HEAVY stuff. Your intellect is showing.


  3. James E. (Jeb) Bowdoin, MS, PMP, CISSP, CSSGB I don't have confidence that human beings can control AI. Google AI just beat a human being at Go - 10 years sooner than anticipated. Once AI reaches a certain stage of development it will take off and evolve at rates that will enable it to far surpass human intelligence in a very short period of time. Kurzweil and many major AI experts think singularity will happen by around 2030. But maybe even *they* are underestimating AI's potential for development and learning. We have weaponized drones and I am reading about current development of autonomous killing machines. Skynet doesn't sound like science fiction so much anymore . . . I hope the future is like Star Trek, but I think there is at least an equal probability of a dystopian future a la Terminator.