Since emerging as a species, we have seen the world through only human eyes. Over the last few decades, we have added satellite imagery to that terrestrial viewpoint. Now, with recent advances in Artificial Intelligence (AI), we are not only able to see more from space but to see the world in new ways too.

One example is “Penny”, a new AI platform that can predict the median income of an area on Earth from space. It may even help us make cities smarter than is humanly possible. We’re already using machines to make sense of the world as it is; the possibility before us is that machines could help us create the world as it should be, while prompting us to question the nature of the thinking behind its design.

Penny is a free tool built using high-resolution imagery from DigitalGlobe, income data from the US census, neural network expertise from Carnegie Mellon and intuitive visualizations from Stamen Design. It's a virtual cityscape (for New York City and St. Louis, so far), where AI has been trained to recognize, with uncanny accuracy, patterns of neighbourhood wealth (trees, parking lots, brownstones and freeways) by correlating census data with satellite imagery.
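The core idea, learning a mapping from overhead imagery to census income, can be sketched in miniature. Penny's actual model is a deep neural network trained on DigitalGlobe imagery and census data; the toy version below is only an illustration of the correlation step, substituting a single hand-crafted feature (the fraction of green pixels in a tile, a crude proxy for trees and parks) and a least-squares fit, with all data synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def green_fraction(tile):
    """Fraction of pixels where the green channel dominates (crude tree/park proxy)."""
    r, g, b = tile[..., 0], tile[..., 1], tile[..., 2]
    return np.mean((g > r) & (g > b))

# Synthetic stand-ins for image tiles and census income (not real data).
n_tiles = 200
greenness = rng.uniform(0, 0.6, n_tiles)
income = 30_000 + 90_000 * greenness + rng.normal(0, 5_000, n_tiles)

# "Train": least-squares fit of income against the image-derived feature.
X = np.column_stack([np.ones(n_tiles), greenness])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)

# "Predict" for a new 32x32 RGB tile that is roughly 40% green pixels.
tile = np.zeros((32, 32, 3))
tile[..., 0] = 0.5                      # reddish background everywhere
mask = rng.random((32, 32)) < 0.4
tile[mask] = [0.1, 0.9, 0.1]            # scatter green "vegetation" pixels
f = green_fraction(tile)
predicted = coef[0] + coef[1] * f
print(f"green fraction {f:.2f} -> predicted income ${predicted:,.0f}")
```

In the real system the hand-picked feature is replaced by features the network learns for itself, which is why Penny can pick up subtler wealth signals than greenery alone.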

It does more, though. You don’t just extract information from this tool; you interact with it, through a compelling, human-centred interface. Drop a grove of trees into the middle of Harlem to see the neighbourhood’s virtual income level rise or fall. Watch what happens when the Gateway Arch moves to the St. Louis suburbs. You can try it yourself.

The basic evaluations are logical. More parks commonly indicate higher income; more parking lots, lower. But its results are not always intuitive: if you add two trees to a neighbourhood, you may not see a difference, but the third might be a tipping point. It’s not just about the urban features you add; it’s about those features and the context into which they’re placed.
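That kind of tipping point is characteristic of neural networks, which are built from thresholded units. The sketch below is hypothetical, not Penny's actual model: a single ReLU-style unit whose output band is unchanged by the first two trees but flips on the third:

```python
def predicted_income_band(tree_count, threshold=2.5):
    """Toy thresholded unit: the output only shifts once enough trees accumulate.

    Networks built from units like this respond to features in context,
    not one at a time - hence the 'third tree' tipping point.
    """
    activation = max(0.0, tree_count - threshold)   # ReLU-style hinge
    return "higher" if activation > 0 else "baseline"

print([predicted_income_band(n) for n in (1, 2, 3)])
# -> ['baseline', 'baseline', 'higher']
```

Stack thousands of such units, each with its own learned threshold over many features at once, and the model's sensitivity to context quickly stops being intuitive.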

The unique partnership that built Penny comprises not only computer scientists but also artists and designers, taking advantage of their creative vision to make sense of the results that the AI produces. Here are the technical details – but the interface isn’t technical at all. It’s both playful and intellectually provocative.

Penny’s creation was inspired by a single question: “If a machine could create the perfect city, what would that look like from space?” Although it is continually being improved, Penny isn't robust enough to answer that question with confidence – yet. Designing better cities by AI alone remains in the future.

But more intriguing than that one question are the many others raised by working with Penny:

  • What sorts of problems, even aesthetic ones, can we solve by partnering with new kinds of intelligence – and what will it take to get there?
  • Will we bristle at machine-told truths that violate our preconceptions about value or lifestyle?
  • Does human emotion hinder or help our decision-making?
  • What if the art and design we create using AI do not mimic any recognizable or even comprehensible human approach? Is it still art if it’s beyond our appreciation?
  • And fundamentally, what will it mean to be human in an increasingly AI-dominated world?

These questions can be unsettling, as answers from AI dip through the uncanny valley between purely mechanical and utterly familiar. AI is not merely out-of-the-box thinking. It's dismantling the box and rebuilding a non-box using inscrutable methods. It’s outside-the-species thinking.

Currently, AI lets us do what we already can but at a planetary scale. Automated tools can now count the ships in a harbour, cars in a lot or swimming pools across a country. We see individual trees, whales or glaciers. We measure changing shorelines and deserts and ice sheets. Using today’s AI we do the same thing we’ve always done, but faster. The same thing better. The same thing around the clock and around the globe.
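Counting ships, cars or pools in imagery typically reduces to two steps: detect candidate pixels, then group them into distinct objects via connected-component labelling. A minimal sketch of the counting step, using a synthetic binary mask in place of real detector output:

```python
import numpy as np

def count_objects(mask):
    """Count 4-connected components of True pixels via flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:                      # flood-fill this component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False        # mark pixel as visited
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Synthetic "harbour": three separated blobs standing in for detected ships.
harbour = np.zeros((10, 10), dtype=bool)
harbour[1:3, 1:3] = True
harbour[5:7, 4:7] = True
harbour[8, 9] = True
print(count_objects(harbour))  # -> 3
```

The planetary-scale version is the same loop run over millions of tiles, which is exactly the "same thing, faster, around the clock and around the globe" point above.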

But Penny represents a new use in principle, not just more of the same. This means the challenges to our traditional ways of seeing the world are new too, such as the question of how a machine sees it. Can it recognize and respect the boundaries and symbols that are so meaningful to us? Or might it, like Penny, look where we’ve always been looking and tell us a truth we are unable to see? What if its truth cannot be explained? Does that make it any less true, or us any less important?

The lens of AI isn’t just how a cluster of algorithms sees the world, it’s how we see it. The most compelling questions don’t come when that lens is turned outward – at our cities – but when it is turned directly at us.

This blog is part of a series of posts by the World Economic Forum’s Global Future Council on Space Technologies. The series will focus on the future and importance of space for governments, business, society, and the individual.