FREEDOM AND SAFETY
We’re rapidly approaching the era of abundant knowledge - a time when you can know anything you want, anywhere you want, anytime you want. An era of radical transparency.
By 2020, it’s estimated we’ll have 50 billion connected devices, which will generate over 600 zettabytes of information.
The global network of connectivity, drones, and satellites is not only connecting people; it's also connecting things - devices and sensors that form the Internet of Things and the Internet of Everything.
In this blog, we'll cover four different levels of the Internet of Things:

1. Satellite constellations
2. Autonomous drones
3. Autonomous cars
4. Augmented reality headsets and wearable sensors
Let’s dive in.
In an earlier blog, I discussed the coming age of microsatellite constellations from SpaceX and OneWeb.
OneWeb is working on a constellation of almost 900 satellites, while SpaceX will deploy over 12,000 mini-fridge-sized satellites. Both constellations plan to deploy global 5G internet. But global internet is only a fraction of the potential of microsatellite constellations. Microsatellite constellations equipped with high-resolution cameras are extending the Internet of Things, providing a massive amount of data to help solve the world’s grand challenges.
As of August 2017, there were nearly 1,800 operational satellites in orbit. Of these, 742 are communications satellites, 596 are used for Earth observation, and 108 are used for navigation.
We’re seeing a massive increase in the number of operational satellites as satellites become smaller and launch costs plummet.
Private companies all over the world are building out satellite technology. Planet Labs is a disruptive company using milk carton-sized imaging satellites to help entire industries obtain game-changing data. Planet Labs operates 175+ satellites in orbit, enabling it to image anywhere on the globe at up to 3.72-meter resolution.
Alternatively, Planet Labs offers a specialized, targeted satellite option called SkySat. Thirteen of these satellites can achieve up to 72-centimeter resolution. SkySats can also capture video, which can be used to derive 3D models. These satellites are built on the same technology that Google deployed to capture crisp 3D imagery for Google Maps.
Closer to Earth's surface (a few hundred feet above our heads), we're developing an extensive network of autonomous drones that collect valuable information for farmers, wind turbine surveyors, financial institutions, and many others.
At CES 2018, the Department of Transportation announced that over 1 million drones were officially registered with the FAA. The FAA predicts that by 2020, 7 million drones will be flying over North America.
While private and commercial drone flight rules remain moderately restrictive, this past October the US announced plans for the federal government to work with companies to start deploying large fleets of drones with more flexible flight restrictions.
As drones become more robust, larger, and more capable, they will generate massive amounts of imaging and sensor data.
A small drone fleet can easily generate 100 terabytes of data per day.
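To put that figure in perspective, here's a back-of-envelope sketch in Python. The per-drone recording rate, duty cycle, and sensor overhead below are all illustrative assumptions, not measured values:

```python
# Back-of-envelope estimate of daily data volume for a small drone fleet.
# All figures below are illustrative assumptions, not measured values.

GB_PER_MINUTE_4K = 1.0       # assumed: ~1 GB/min of 4K video per drone
FLIGHT_HOURS_PER_DAY = 8     # assumed duty cycle
SENSOR_OVERHEAD = 1.3        # assumed: +30% for telemetry, logs, extra sensors

def fleet_terabytes_per_day(num_drones: int) -> float:
    """Rough daily data volume (TB) for a fleet under the assumptions above."""
    gb_per_drone = GB_PER_MINUTE_4K * 60 * FLIGHT_HOURS_PER_DAY * SENSOR_OVERHEAD
    return num_drones * gb_per_drone / 1000  # GB -> TB (decimal)

print(round(fleet_terabytes_per_day(160), 1))  # -> 99.8 TB/day
```

Under these assumptions, a fleet of roughly 160 drones already approaches the 100-terabyte-per-day mark; richer sensor payloads would push a much smaller fleet past it.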
During some of the tragic natural disasters of 2017 (wildfires, hurricanes), drone data collection was invaluable for saving lives, surveying damage, and providing search and rescue operations with crucial footage of hard-to-reach locations. As drones fly more and collect more data, we can use this data in a positive feedback loop to better train autonomous drones.
Speaking of autonomous vehicles… autonomous cars are a big part of the incoming era of abundant knowledge.
Intel predicts that the self-driving car industry will grow to $7 trillion by 2050.
One implication is that these autonomous cars will begin imaging everything surrounding them, all the time. Imagine millions of autonomous cars on the street, each packed with dozens of cameras, LiDAR, and radar "sensing" to help the car navigate.
One key sensor is LiDAR, a laser-based technology that builds a 3D map of a car's surroundings by measuring how long it takes millions of laser pulses to bounce off surrounding objects and return to the car. LiDAR market leader Velodyne's VLS-128 system can capture up to 9.6 million data points per second. Tesla, unlike Waymo, is avoiding LiDAR altogether, opting instead for an ultrasonic, radar, and camera approach to autonomous vehicles.
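That points-per-second figure translates into a striking raw data rate. Here's a quick sketch; the 9.6 million points/second comes from the text above, while the bytes-per-point figure is an assumption (x, y, z, and intensity stored as 4-byte floats):

```python
# Rough raw-data-rate estimate for a high-end automotive LiDAR.
# Points-per-second is the VLS-128 figure quoted above; bytes-per-point
# is an assumption (x, y, z, intensity as 4-byte floats each).

POINTS_PER_SECOND = 9_600_000
BYTES_PER_POINT = 16

def lidar_gb_per_hour() -> float:
    """Raw point-cloud volume in GB per hour under the assumptions above."""
    return POINTS_PER_SECOND * BYTES_PER_POINT * 3600 / 1e9

print(round(lidar_gb_per_hour(), 1))  # -> 553.0 GB/hour
```

Over half a terabyte per hour from a single sensor, before any cameras or radar are counted - which is why onboard compression and edge processing matter so much for autonomous vehicles.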
The bottom line is this: when we enter the era of autonomous cars, there will never be a car, pedestrian, accident, or street-side pick-pocket that isn’t being imaged. These cars will record, in detail, an extraordinary abundance of images.
While the world now has more mobile phones than humans, we will soon see the emergence of an even more advanced technology: augmented reality headsets.
Such AR glasses will feature a multitude of forward-looking cameras that image everything at sub-millimeter resolution as you walk about your day. By the end of 2020, our smartphones, AR glasses, watches, medical wearables and smart dust are expected to constitute 50 billion connected devices, hosting a total of 1 trillion sensors.
And just as the number of connected devices is increasing exponentially, the number of sensors per connected device is also increasing exponentially.
So far, sensors on phones have doubled every four years. This means we can expect 160 sensors per mobile device by 2027, and a world jam-packed with nearly 100 trillion sensors - sensors that can be accessed and interrogated by your AI to answer almost any question.
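The doubling arithmetic above can be written as a one-line formula. The 2015 baseline of roughly 20 sensors per phone is an assumption chosen so that the projection reproduces the 160-sensors-by-2027 figure:

```python
# The post's sensor arithmetic as a formula: doubling every four years.
# The ~20-sensors-per-phone 2015 baseline is an assumption chosen to
# match the 160-by-2027 projection quoted above.

BASE_YEAR, BASE_SENSORS, DOUBLING_YEARS = 2015, 20, 4

def sensors_per_device(year: int) -> float:
    """Projected sensors per phone, doubling every DOUBLING_YEARS years."""
    return BASE_SENSORS * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

print(sensors_per_device(2027))  # -> 160.0
```

Three doublings between 2015 and 2027 take 20 sensors to 160; the same compounding, multiplied across tens of billions of devices, is what drives the trillions-of-sensors total.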
I can envision waking up in the morning, putting on my augmented reality contact lenses, and forgetting about them for the rest of the day. While in my eyes, they'll record every conversation, every person crossing the street, and everything I look at. Drawing from this constant stream of observational data, my AI will learn my social graph and preferences.
The bottom line is, we are heading towards a future where you can know anything you want, any time you want, anywhere you want. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.
Want to know the average spectral color of women’s blouses on Madison Avenue this morning? Ask it, and your AI can gather the image data and provide you an accurate answer in seconds. If you’re in the fashion business, you can go on to ask whether any recent advertising campaign correlates with the change in blouse color.
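The aggregation step behind a query like that is simple once the imagery exists. Here's a toy sketch of the final averaging; the sample pixel values are made up, standing in for the garment crops a (hypothetical) vision pipeline would extract from street imagery:

```python
# Toy sketch of the aggregation behind an "average color" query.
# The sample pixel values below are invented for illustration; a real
# pipeline would extract them from street imagery with a vision model.

def average_color(samples):
    """Mean RGB of a list of (r, g, b) tuples, rounded to integers."""
    n = len(samples)
    return tuple(round(sum(px[i] for px in samples) / n) for i in range(3))

# pretend these are blouse-pixel samples from this morning's images
blouse_pixels = [(180, 40, 60), (200, 50, 70), (170, 45, 65)]
print(average_color(blouse_pixels))  # -> (183, 45, 65)
```

The hard part of the query, of course, is not this averaging but the upstream detection - finding the blouses in millions of frames - which is exactly the kind of task the AI assistants described here would handle.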
Such an abundance of data is what I call “radical transparency,” and it leads to a few interesting conclusions, which are probably the topic of a future blog…
First, that privacy may truly be a thing of the past. And second, that it will become harder and harder to do anything in secret without leaving a digital trail.