I recently listened to the latest Exponent podcast episode where Ben Thompson and James Allworth discussed artificial intelligence and where we need to focus our collective attention as a society to address the coming changes this technological shift will bring about.

I’ve been thinking about how artificial intelligence1 affects the labor force and realized I don’t have a good enough grasp on the subject to have an informed opinion.

Here’s what I plan to think through to form this opinion:

  1. What has happened?
  2. What is happening now?
  3. What will happen in the future, all else equal?
  4. What do I want to happen in the future?
  5. What needs to change to make this new future happen?

1. What has happened?

If I can’t establish what happened in the past, I can’t effectively predict where things might go in the future.

2. What is happening now?

If I don’t understand what’s happening right now, then I’m not in a good position to make a claim about the future.

3. What will happen in the future, all else equal?

Here’s where things start to get more subjective. Forming an opinion about what I think will happen (if no major changes occur) is critical before forming an opinion about what I want to happen. This is rooted in fact, but there is more interpretation required here than in previous steps.

4. What do I want to happen in the future?

I get to be subjective here, but it becomes more difficult to make my vision come to fruition the further it deviates from where things are headed with the status quo.

5. What needs to change to make this new future happen?

This is actually less subjective than the previous step. I could want anything to happen in the previous step, but here I need to form an opinion about what to do that is both grounded in reality and effective. These first steps toward my vision need to be in the adjacent possible2.

Next steps

I’ll be writing a series of blog posts on these steps as I read more about artificial intelligence, and build toward a more informed opinion.


  1. Artificial intelligence is hard to define, as Ben and James point out in the podcast. There are two categories of artificial intelligence: artificial general intelligence (AGI) and “narrow” artificial intelligence. AGI does not yet exist; it describes a system that could perform any type of cognitive task at least as well as a human could. “Narrow” artificial intelligence describes systems that can learn optimal solutions, but only within a specific domain. Some examples of “narrow” artificial intelligence are Siri, AlphaGo, and Deep Blue. I will write more about this soon, but Ben and James believe that “narrow” artificial intelligence is a major threat to large swaths of jobs in the near future. 

  2. What solutions can be built from the capabilities I already have? The adjacent possible is a concept developed to explain which new molecules can be formed from existing molecules. It works really well for explaining any new possible solution to a problem. Solutions that do not yet exist, but are possible to build now, are in the adjacent possible. 

Nassim Nicholas Taleb is writing a new book, Skin in the Game. Taleb discusses the concept of equality in uncertainty in one of the book’s excerpts, and I have some thoughts on how the internet is making equality in uncertainty possible in a way that could not have previously existed.

Information asymmetry in a sale isn’t fair. Taleb shares a few punchy anecdotes that drive this point home. But to what extent must we level the informational playing field?

Taleb shares a debate between two ancient Stoic philosophers, Diogenes and Antipater, about what degree of information an ethical salesperson must share:

Assume a man brought a large shipment of corn from Alexandria to Rhodes, at a time when corn was expensive in Rhodes because of shortage and famine. Suppose that he also knew that many boats had set sail from Alexandria on their way to Rhodes with similar merchandise. Does he have to inform the Rhodians? How can one act honorably or dishonorably in these circumstances?

Each philosopher promoted a valid viewpoint:

Diogenes held that the seller ought to disclose as much as civil law would allow. Antipater believed that everything ought to be disclosed – beyond the law – so that there was nothing that the seller knew that the buyer didn’t know.

In theory, Antipater’s view resonates with me. In practice, I can see how this view could quickly become prohibitive to the seller.

I believe buyers need access to information, coupled with competency to filter and interpret that information, to satisfy Antipater’s threshold of ethical selling.

Very rarely does a situation present itself that is as clear-cut as the man with the shipment of corn. The corn seller would be taking advantage of the Rhodians because they did not have access to critical information that would affect their decision to purchase his corn.

I think this last part - access to information - is key, because the internet has fundamentally removed barriers to acquiring most information. While the internet has not removed information asymmetry entirely, it has begun to level the playing field.

Imagine that the Rhodians in Taleb’s story are living in 2017. If the other corn sellers are en route to Rhodes from Alexandria and are sending tweets and posting to Instagram that they’re on their way, the first corn seller could sell without disclosing that more corn is on the way1 because there is no information asymmetry.

When information is not readily accessible, or when the buyer is unlikely to be able to interpret it2, the seller must step in and provide the buyer with that filtered information so that both parties share equality in their uncertainty.

I can see how providing each buyer with this filtered information could have been prohibitive for a seller in the past, but software can now automate most of this process. This automation makes it easier to be an ethical salesperson in 2017 than in the days of Diogenes and Antipater, because there’s no excuse to withhold relevant information from buyers.

I’m looking forward to reading Skin in the Game. In the meantime, I’ll keep chewing on the concept of equality in uncertainty.


  1. This assumes the Rhodians had access to the internet and were acting rationally. 

  2. Filtering and interpreting information is getting harder every day as more and more information is thrown at us from the internet. Information is essentially a commodity and our ability to process it is now the constraint. 

I prioritized putting together a new website as an excuse to brush up on my understanding of web technology and to create a centralized place for me to write.

I’ll be writing more frequently in 2017 and will be experimenting with different types of posts and content.

Please be patient with me while I settle in.

I recently created a map of the cost of crime in Philadelphia’s neighborhoods: Crime clusters

After calculating these values in R, I needed to present the total cost of crime in each neighborhood in a digestible way.

I picked a choropleth map, but didn’t know how to cluster the values. I had two problems: how many clusters should I choose, and how should I determine the different breaks for each cluster?

I discovered the Jenks natural breaks optimization, which nearly answers both questions. The Jenks method determines breaks by minimizing the variance within classes and maximizing the variance between classes1.

I say nearly because the Jenks method still requires you to specify the number of classes to cluster the data into. Fortunately, you can calculate the fit and accuracy of any given choice. As the number of breaks increases, the accuracy increases, but the legibility of your map can suffer.

R and the classInt package make it easy to see this relationship so you can test different numbers of breaks. Here’s a screenshot of me testing 4, 6, 8, 10, and 12 breaks (Testing Jenks natural breaks). 6, 8, and 10 breaks looked promising, so I compared them further and ultimately selected 8 breaks.
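If you want to reproduce this comparison, here’s a minimal sketch of the workflow in R. It assumes a numeric vector called costs holding the total cost of crime for each neighborhood - the variable name is mine, not from the original project - and uses classInt’s built-in fit measures.

    # Sketch: compare Jenks classifications with different numbers of breaks.
    # `costs` is assumed to be a numeric vector of total crime cost per neighborhood.
    library(classInt)

    # Check goodness-of-variance fit and tabular accuracy for several candidates
    for (k in c(4, 6, 8, 10, 12)) {
      ci <- classIntervals(costs, n = k, style = "jenks")
      print(jenks.tests(ci))
    }

    # Inspect the break values for the final choice of 8 classes
    classIntervals(costs, n = 8, style = "jenks")$brks

From there, the chosen break values can be exported and used to style the choropleth’s clusters.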

Mapbox Studio and Mapbox GL JS made it easy to style these different breaks and push the map to the web (I used this guide to learn how).

If you’re interested in calculating the cost of crime for your city, check out the project on GitHub.


  1. This method breaks down when comparing breaks across different datasets (like crime in Philly neighborhoods versus crime in DC neighborhoods), but since I was clustering a single univariate dataset, it worked well. 

This episode of Radiolab discusses the contentious use of aerial surveillance to solve crimes in cities like Dayton, Ohio and Juarez, Mexico. Shoutout to Shilpi Kumar for sharing this with me.

Two things jumped out at me during this episode.

The first is the observation that the advantages of such a system are easy to quantify - reducing certain crimes in Dayton by 30% was one such advantage - while the disadvantages are much more difficult to quantify.

This thought wasn’t expanded upon, but I think it’s worth exploring in greater depth. In this specific case, the disadvantages are unknown, but likely very real. How can you quantify the value of privacy? That said, refusing to make a decision due to an externality being unquantifiable seems foolish. There needs to be some attempt to quantify the previously unquantifiable to make a more informed decision possible.

There is precedent for this in a related field. The RAND Corporation released a Cost of Crime calculator that estimates the total social cost of crime in dollars (paper here). While not a perfect measure of the true costs of crime, these values can help law enforcement agencies more objectively prioritize crime prevention1.

If a similar “Cost of Privacy” calculator existed, it would be possible to have a more objective debate about the merits of - in this specific case - aerial surveillance. A quick Google search led me to this report by the RAND Corporation about the costs of heightened security to the UK public. While there is no calculator for specific security initiatives, the report presents three scenarios with a measure of the public’s tolerance for heightened security at the cost of personal privacy. Each scenario resulted in a different tolerance, which suggests the public would support some policy changes that increase security at the cost of privacy but reject others.

Itemizing these different policy changes and scenarios, and calculating the likely societal costs, would help create a more realistic picture of the cost-benefit analysis of aerial surveillance, as well as other proposed security policies.

The second thing that caught my attention was the confirmation that selling new technology to governments is hard. Persistent Surveillance, the company discussed in the episode, has proposals out to over 100 law enforcement agencies around the world, but none of them is willing to purchase due to privacy concerns from the public, despite the company demonstrating dramatic reductions in crime and even helping to shut down a drug cartel with its system. Persistent Surveillance is now using its system to analyze traffic while it waits to hear back from law enforcement agencies.

In defense of the government agencies, they have a responsibility to serve and protect the people in their jurisdictions. If those people are sufficiently divided on the adoption of a technology with unclear costs to society, then abstaining from adoption can make sense. One of the key lessons here is that it’s often not enough to create an effective solution for a particular problem, especially when government is involved.


  1. Any prioritization would likely need to include an assessment of the department’s ability to prevent a given crime. It might not make sense to direct all police resources to homicide prevention if there’s a low likelihood that such efforts could prevent the crime, even if it has the highest social cost. This is not an original thought - the crime forecasting product I work on, HunchLab, takes this efficacy metric into account before making forecasts for risky crime areas. 

I’ve been thinking a lot about how I can become better at what I do. Cal Newport has great insights into a concept he calls deep work, which he contrasts with shallow work.

“Knowledge workers dedicate too much time to shallow work — tasks that almost anyone, with a minimum of training, could accomplish (e-mail replies, logistical planning, tinkering with social media, and so on).”

So what is deep work?

“I argue that we need to spend more time engaged in deep work — cognitively demanding activities that leverage our training to generate rare and valuable results, and that push our abilities to continually improve.”

So why should we concern ourselves with deep work, especially when it sounds difficult?

Newport outlines three benefits:

  1. Continuous improvement of the value of your work output.
  2. An increase in the total quantity of valuable output you produce.
  3. Deeper satisfaction (a.k.a. “passion”) for your work.

The rest of Newport’s post outlines a step-by-step process for engaging in deep work, so I won’t repeat it here.

One of the areas I’m struggling with is determining what to work on. Newport identifies defining the scope of your deep work as a mission-critical step and has some suggestions on how to get this done.

This post and the concept of deep work resonated with me. It’s changing how I think about what I spend my time doing.

“You want to separate the creative process of seeing that the geometry is there for a fork from the editing process of analyzing whether the fork can be made profitable.” – Ward Farnsworth1

I think this quote can be distilled into a more broadly applicable form:

You want to separate the creative process of ideation from the editing process of analyzing whether or not an idea is possible.

Too often I’ve found myself shooting down ideas before they have been given a fair chance to take root and develop. I need to work on this.


Detroit has a density problem. The city is spread out over one hundred thirty-nine square miles with a total population of less than seven hundred thousand. The land areas of San Francisco, Boston, and Manhattan fit inside Detroit’s geographic footprint with room to spare. In simpler terms, it’s a very big city with a rapidly declining population. At its zenith in 1950, Detroit contained over one million eight hundred thousand residents. Due to a myriad of factors, most of Detroit’s residents at that time lived in single-family homes. Macroeconomic conditions played a major role in this trend, including the rise of the automobile and national policy incentives such as the home mortgage interest deduction.

High-density living was discouraged at the local level too, with General Motors and other companies purchasing Detroit’s streetcar lines and replacing them with bus lines. While this was not necessarily a nefarious act by General Motors, the bus system allowed residents to live at even lower densities farther from the city’s core, as it was substantially cheaper to add more bus routes than to add additional streetcar rail lines. All of these factors contributed to Detroit’s population density declining from a peak of approximately thirteen thousand three hundred fifty persons per square mile in 1950 to less than five thousand seventy persons per square mile in 2014. It is important to note that 1950 Detroit is not the ideal model for high-density living. For comparison, Manhattan’s population density in 2014 was well over seventy thousand persons per square mile, more than five times Detroit’s peak density in 1950.

Having established Detroit’s low density from a population perspective, it is worth exploring why the City of Detroit is unable to remain solvent. In Michigan, city budgets are funded almost entirely through taxes levied on residents, and those residents fund the services everyone expects a city to provide. Providing these services is a necessary responsibility of a municipality. Detroit, despite its declining population, still needs to support the services for a one hundred thirty-nine square mile city. One hundred thirty-nine square miles of physical infrastructure is needed, including sewer, water, and electrical systems, much of it in need of substantial upgrades and replacement. One police force is required to serve residents living in a city spread out over one hundred thirty-nine square miles. One firefighting force is required to serve residents living in a city spread out over one hundred thirty-nine square miles. One public transportation service is required to transport residents living in a city spread out over one hundred thirty-nine square miles. Detroit’s responsibilities are, quite literally, larger than those of most other great American cities. The city’s population, and therefore its tax base, is shrinking while its obligations to its residents remain largely the same.

All of this is an oversimplification, but that’s okay. I purposefully did not discuss the city’s ongoing bankruptcy, blight, racially charged history, violent crime, graduation rate, employment rate, or the many other important problems the city faces. At a fundamental level, the City of Detroit needs to figure out how to increase the money flowing into its budget or how to reduce the obligations it must fulfill. Those are the two main variables in play. Increasing the money flowing to the city requires increasing the city’s tax base, which could be accomplished either by improving the city’s nearly 50% employment rate or by adding residents. I believe either effort is futile in the near future if pursued in isolation. All indicators point to Detroit continuing to lose residents and to the non-working population lacking the means to join the workforce in and around the city. One of the issues facing the non-working population is a lack of reliable transportation, which nearly any full-time job requires. That leaves reducing the city’s financial liabilities as the only near-term variable to address.

As Detroit (hopefully) nears the end of its bankruptcy, reducing financial liabilities seems to be spoken for. How could the City of Detroit shed any more of its liabilities post-bankruptcy? As discussed previously, the City of Detroit is supporting a geographically gigantic infrastructure.1 If the city filled up with a million more residents overnight, dispersed evenly, the physical infrastructural requirements would largely be the same. The relationship between the costs associated with maintaining a city with seven hundred thousand residents versus one million seven hundred thousand residents is not linear in a one hundred thirty-nine square mile city. In fact, the City of Detroit’s infrastructure allocation is terribly inefficient due to its low population density. If, somehow, the city were able to physically shrink, it could reduce these cost inefficiencies substantially. If, somehow, the city were able to replicate the population density of Manhattan at roughly seventy thousand persons per square mile, the residents of Detroit could fit into a ten square mile area of the city. The City of Detroit is currently managing services spanning nearly fourteen times that geographic area.
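The back-of-envelope arithmetic behind those figures is easy to check. Here’s a quick sketch in R using the approximate numbers cited above (all inputs are rough estimates taken from this post, not official figures).

    # Rough arithmetic behind the density figures in this post.
    # All inputs are approximations taken from the text above.
    detroit_area_sq_mi <- 139
    detroit_pop_2014   <- 700000     # just under seven hundred thousand
    detroit_pop_1950   <- 1850000    # roughly 1.85 million at the 1950 peak
    manhattan_density  <- 70000      # persons per square mile, roughly

    detroit_pop_2014 / detroit_area_sq_mi  # ~5,000 persons per square mile today
    detroit_pop_1950 / detroit_area_sq_mi  # ~13,300 persons per square mile in 1950
    detroit_pop_2014 / manhattan_density   # ~10 square miles at Manhattan-level density
    detroit_area_sq_mi / (detroit_pop_2014 / manhattan_density)  # ~14x that footprint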

It is unrealistic to imagine a Detroit with Manhattan’s density, but a geographically smaller, denser Detroit should be considered. A smaller geographic footprint would allow for drastically faster police and firefighter response times. Reliable public transportation would become viable for all city residents, opening up access to jobs for the many residents currently lacking employment. The City of Detroit would be positioned to support its residents with the basic services they are entitled to, and to avoid slipping back under the control of a financial manager or back into bankruptcy. Detroit Future City lays out an achievable strategic framework for the city’s transformation, much of it based on clustering development. These ideas are not pie-in-the-sky or speculative. The City of Detroit and its residents must consider moving toward a much higher population density to position the city for long-term viability in the coming decades.


  1. If I’ve learned anything from playing Civilization V, it’s that large empires are costly to maintain. The costs of the roads, rail, and units necessary to support these “wide” empires are enormous and only sustainable when your empire is generating equally enormous amounts of wealth. 

parislemon

Yesterday, as I landed in a foreign country, I did my normal routine: switched off airplane mode on my phone, waited for signal to kick in, repeat, repeat, repeat. Once I connected, in poured the push notifications, the first of which is usually a text from the foreign carrier I just connected to warning me that I’m roaming and threatening to take my first child for every MB of data used. Yesterday, the message was a little different.

It was actually a text message from my U.S. carrier, Verizon, notifying me to turn data services off or use WiFi to avoid data charges. I thought nothing of this since I had the global data plan already enabled on my phone. Next, in came the foreign carrier text telling me the current take-your-first-child rates: $20.48 per MB of data used. Not even one minute later (I checked the time stamps), a third message came in, this time from Verizon again, alerting me that I’ve “exceeded $50 in global data charges.”

Again, I didn’t think too much of this because I knew my global data plan was enabled. That plan allows you to pay $25 for each 100MB of data usage when traveling abroad — still a rip-off, yes, but a relative steal compared to the aforementioned take-your-first-child rates normally associated with international data roaming. Because I had been in another country a few weeks prior, I thought such a message might just be a residual warning from data usage on that trip.

Nope.

US carriers are not in the business of excellent customer service. T-Mobile is moving in the right direction, but mobile telecom is still an unfriendly industry for consumers.

Carriers can get away with this crap because competition is severely limited, much like the cable television industry. More (real) competition is seemingly impossible in industries like telecom due to the astronomical entry costs. US telecom is an oligopoly, and it’s extremely frustrating.