I recently listened to the latest Exponent podcast episode where Ben Thompson and James Allworth discussed artificial intelligence and where we need to focus our collective attention as a society to address the coming changes this technological shift will bring about.
I’ve been thinking about how artificial intelligence¹ affects the labor force and realized I don’t have a good enough grasp on the subject to have an informed opinion.
Here’s what I plan to think through to form this opinion:
- What has happened?
- What is happening now?
- What will happen in the future, all else equal?
- What do I want to happen in the future?
- What needs to change to make this new future happen?
1. What has happened?
If I can’t establish what happened in the past, I can’t effectively predict where things might go in the future.
2. What is happening now?
If I don’t understand what’s happening right now, then I’m not in a good position to make a claim about the future.
3. What will happen in the future, all else equal?
Here’s where things start to get more subjective. Forming an opinion about what I think will happen (if no major changes occur) is critical before forming an opinion about what I want to happen. This is rooted in fact, but there is more interpretation required here than in previous steps.
4. What do I want to happen in the future?
I get to be subjective here, but it becomes more difficult to make my vision come to fruition the further it deviates from where things are headed with the status quo.
5. What needs to change to make this new future happen?
This is actually less subjective than the previous step. I could want anything to happen in the previous step, but here I need to form an opinion about what to do that is both grounded in reality and effective. These first steps toward my vision need to be in the adjacent possible².
I’ll be writing a series of blog posts on these steps as I read more about artificial intelligence and build toward a more informed opinion.
¹ Artificial intelligence is hard to define, as Ben and James point out in the podcast. There are two categories of artificial intelligence: artificial general intelligence (AGI) and “narrow” artificial intelligence. AGI does not yet exist; it refers to a system that could perform any type of cognitive task at least as well as a human could. “Narrow” artificial intelligence, by contrast, describes systems that can learn optimal solutions, but only within a specific domain. Some examples of “narrow” artificial intelligence are Siri, AlphaGo, and Deep Blue. I will write more about this soon, but Ben and James believe that “narrow” artificial intelligence is a major threat to large swaths of jobs in the near future.
² What solutions can be built from the capabilities I already have? The adjacent possible is a concept developed to explain which new molecules can be formed from existing molecules. It works equally well for any new possible solution to a problem. Solutions that do not yet exist, but are possible to build now, are in the adjacent possible.