Planet Pulse

Updates, insights, fun and other musings about the state of the planet.

Ecuador Earthquake Data Available Under Open License

The devastating 7.8 magnitude earthquake that struck Ecuador on Saturday has taken hundreds of lives and displaced thousands. Ordinary citizens, rescue crews and aid workers are doing everything they can on the ground to help survivors, assess damage, and direct relief efforts.

We want to help. Planet’s imagery of the region is now available under an open usage license (CC BY-SA), and is traceable on OpenStreetMap. This data includes pre- and post-quake imagery collected by our Dove and RapidEye satellite constellations. Imagery collected between April 16 and April 20 is currently available.

Portoviejo, Ecuador. Captured by a RapidEye satellite on April 19, 2016

We will continue to release relevant imagery to help first responders on the ground and the community of digital humanitarians mapping this disaster from afar.

If you’re interested in contributing to relief efforts, you can download our imagery immediately from the planetlabs-ecuador-earthquake-201604 S3 bucket. If you’d like to browse and analyze this imagery data on Planet’s online platform (API or online browser), email disaster-response@planet.com.
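If you’d rather script the download, a minimal sketch with boto3 might look like the following. The bucket name comes from this post; the object keys are whatever you find when listing, so treat the download line as a placeholder.

```python
# A sketch for fetching imagery from the public S3 bucket with boto3.
# The bucket is public, so we use unsigned (anonymous) requests.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
bucket = "planetlabs-ecuador-earthquake-201604"

# List the first few objects to see what's available.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", [])[:10]:
    print(obj["Key"], obj["Size"])

# Then download whichever scene you need (the key here is a placeholder):
# s3.download_file(bucket, "<key-from-listing>", "scene.tif")
```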

Three Years in Space

On this day in 2013, Dove 2 escaped the surly bonds of Earth aboard a Soyuz 2-1A rocket (along with an unsuspecting and ill-fated crew of 45 mice, 8 gerbils, 15 geckos, and some other passengers). The launch, on a sunny day at Baikonur Cosmodrome in Kazakhstan, proceeded perfectly, and first contact was made on the first set of passes (Sunday, 21 April 2013) over a rental dish in Palo Alto.

Dove 2’s Soyuz launch

This was a primordial time in the history of Planet Labs. We were named “Cosmogia” then, and the manufacturing line was inside a $100 greenhouse that we dubbed a “clean-enough room”. Dove 2 was a fourth-generation Dove (Build 4.5, to be exact), nicknamed “Jillian”, and it was to be our first satellite in space. Would our consumer-grade electronics survive the cold, dark, irradiated reality of outer space? Would we even find the satellite after launch? Was Agile Aerospace a pipe dream? But aboard Jillian was a little CPU named “Merlin”, and magic was inevitable.

Dove 2 looking glamorous, next to a collector’s edition Cosmogia mug

For our first launch, Dove 2 had the modest goals of transmitting a health packet, detumbling, and downloading an image (just one image would do). Dove 2 took almost 1000 pictures, all of them manually scheduled via a fancy spreadsheet and some deft real-time copy/pasting. Two of the images were manually rectified to a basemap.

One of the first Dove 2 images: Kakegawa, Japan

One of the first Dove 2 images: sea ice in the Gulf of Bothnia

Dove 2 accomplished all of its mission objectives, proved that the fundamental assumptions of Agile Aerospace were sound, and demonstrated Planet’s value ahead of closing the Series A funding round in June 2013.

As Planet’s Vice President of Mission Operations, I’ve found it exciting and fun to watch Planet grow. Since Dove 2’s launch, we’ve successfully launched 133 satellites, including our newest design, Build 13; our ground station network has expanded from a few rental dishes to our own network of 15 dishes; and we’re now beaming down terabytes of imagery data a day, processing it automatically in our data pipeline, and making it available online.

And Dove 2 started it all.

Illegal Gold Mine Encroaches into Protected Rainforest

From the beginning, Planet’s constellation of Dove satellites has been built around high-frequency imagery with the goal of near-real-time observation of change.

Today, we saw one of the most striking examples of this value proposition. As part of Planet’s Ambassadors Program, analysts at the Amazon Conservation Association used high-frequency Planet imagery to map illegal gold mining in Southern Peru.

Peru’s El Comercio newspaper cited Daniela Pogliani, executive director of the Association for the Conservation of the Amazon Basin, on the illegal activity: “The images show an alarming trend of an activity that expands into new areas with terrible effects on the natural heritage.”

Since the price of gold skyrocketed after the global financial crisis, gold mining has been rampant in Peru’s Madre de Dios province. But as mines on land designated for legal extraction have depleted, miners have crossed into the Tambopata National Reserve, which El Comercio calls “one of the most biodiverse forests in the world.”

Planet Labs imagery reveals illegal mining within the Tambopata National Reserve (south of the green boundary). Courtesy Amazon Conservation Association

The work of the Amazon Conservation Association is supported by Planet’s Ambassadors Program and led by Matt Finer, an ecologist at ACA. To learn more, see our Planet Stories post, visit planet.com, and visit the MAAP Project website.

Get Definitive Skytruth with Esoterra

We’ve been hearing for months from YouTube commenters, chat rooms, and paranormal investigators that no suitable skytruthing solution currently addresses their rigorous needs. Well, Planet Labs has built a solution to fill that gap.

We call it the Esoterra Browser.

Esoterra is our fully-integratable, cloud-based solution that catalogs and maps unexplained fringe events and areas of unique, supernatural interest. In just one week, we’ve successfully captured and orthorectified over 5 million square kilometers of enigmatic Earth imagery, including haunted amusement parks, mysterious islands, and ancient alien geoglyphs.

“We’re really going after new markets here. Our Doves are imaging everywhere, every day—they have a real knack for being in the right place at the right time. Esoterra’s a game-changer for reality TV producers and tabloid editors,” writes Tony Campitelli, SVP Marketing, Planet Labs.

And the applications are endless. Unsure if the truth is out there? Well, just pop open Esoterra and take a look at Area 51… hedge funds can gain invaluable economic insights by counting every vehicle parked in this restricted area:
Area 51, Nevada. Captured April 1, 2016

Precision agriculture experts can monitor field stresses caused by pests, disease, and extraterrestrials.
Our Esoterra change detection algorithm detects crops under stress in California’s Central Valley. Captured on March 27, 2016

International logistics managers can track vessels in the Atlantic Ocean’s paranormal shipping lanes.
Bermuda. Captured on March 23, 2016

Futures traders can discover hidden commodities in the jungle.
El Dorado, Mexico. Captured on March 22, 2016

Local governments can track the movements of temperamental Kaijus with timely before-and-after disaster imagery. (A note: current resolution allows for Class-2 Kaijus and above.)
Kansai International Airport, Runway B. Captured on March 24, 2016

With Planet’s high-resolution imagery, tabloids will no longer need grainy black-and-white imagery to document mysterious sightings.
Loch Ness, Scotland. Captured on March 22, 2016

Esoterra Browser is making supernatural Earth imagery visible, actionable, and accessible. To browse Esoterra imagery, visit our online gallery.

Get Imagery Faster: Introducing Our Super-Fast New Web Tiles

The Planet Platform was designed to be web-first. Planet’s imagery can be downloaded and processed locally, but many of our users enjoy viewing imagery data through a browser. To provide a good experience, all imagery on the Planet Platform needs to be available instantly as web tiles for immediate visualization and interaction.

We’re happy to announce that we just launched our new web tile service—this means faster imagery for everyone. With this release, our performance data show a 2x speedup in the average time it takes to serve a tile. The biggest gains are in the higher percentiles: it used to be that the slower tile requests could take over one minute to process. Check out the “before” chart below; it illustrates the average imagery request latency over a day.

The “before” chart: average imagery request latency over one day.

The spikes indicate tile requests that took the server much longer to process. The spike on the far right represents a request that took 59.04 seconds.

Now check out this “after” chart:

The “after” chart: request latency over one day on the new web tile service.

This chart is generated from monitoring the new web tiles service over the course of a day. At first glance, request latency looks much higher! But the y-axis measures milliseconds. The blue line is the average request latency and the orange dotted line is the 95th-percentile latency. This means even requests that are slower than average are not that much slower; in other words, no more huge spikes.
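If latency percentiles are unfamiliar, here’s a quick illustration with synthetic numbers (made-up data, not our monitoring): the 95th percentile is the latency that 95% of requests beat.

```python
import numpy as np

# Synthetic stand-in for one day of tile request latencies, in milliseconds.
latencies_ms = np.random.lognormal(mean=4.0, sigma=0.5, size=100_000)

print(f"average: {latencies_ms.mean():.1f} ms")
print(f"p95:     {np.percentile(latencies_ms, 95):.1f} ms")  # 95% of requests are faster
```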

So what’s changed? The majority of the speed increase between graphs is from performing part of the computation eagerly instead of on-demand. This is also our first deployment on Elastic Beanstalk which is part of the Amazon Web Services (AWS) suite. Elastic Beanstalk provides some tools which we use for auto-scaling and monitoring the health of the web tiles cluster.
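As a toy sketch of that eager-versus-on-demand tradeoff (an illustration, not our production tile server): render tiles when a scene is ingested, so a request becomes a cache lookup, keeping on-demand rendering only as a fallback.

```python
# Toy tile cache illustrating eager rendering with an on-demand fallback.
tile_cache = {}

def render(scene_id, x, y, z):
    # Stand-in for the expensive warping/compositing work.
    return f"tile bytes for {scene_id} at {z}/{x}/{y}"

def ingest_scene(scene_id, tile_keys):
    for x, y, z in tile_keys:              # eager: pay the cost once, up front
        tile_cache[(scene_id, x, y, z)] = render(scene_id, x, y, z)

def serve_tile(scene_id, x, y, z):
    key = (scene_id, x, y, z)
    if key not in tile_cache:              # fallback: render on demand
        tile_cache[key] = render(scene_id, x, y, z)
    return tile_cache[key]

ingest_scene("RE-scene-1", [(0, 0, 1), (1, 0, 1)])
print(serve_tile("RE-scene-1", 0, 0, 1))   # fast path: cache hit
```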

At the moment, the new web tiles server is only serving RapidEye imagery, but we’re looking forward to rolling it out for PlanetScope and Landsat imagery soon.

If you’re a Planet Platform user, you can log in to test out our newer, faster web tile server today. If you’d like to get a Planet Platform account, sign up here.

World Agri-Tech Investment Summit: The Big Takeaways



I’m Ryan, a Planet Labs Agriculture rep—I work to get our satellite imagery into the hands of decision makers in the agriculture industry, whether they work in an office or in the field.

Last week I was in San Francisco attending the World Agri-Tech Investment Summit. The conference explored growing technology trends in the agriculture industry and in the investment community. Check out my rundown of interesting conference takeaways:

The Beginning

First up was the Planet Labs-hosted welcome reception. Almost 200 people kicked off the conference in the Planet Labs “Mothership”.

Screen Shot 2016-03-25 at 9.43.57 AM

We had a blast kicking off the conference Planet-style, with drinks, snacks, satellites, and good company. It was a great start to a fun and fruitful conference.

The Takeaways

1) Keep it simple
Growers want technology that is proven and easy to use. Planet Labs offers a global, authoritative reference layer for farm management decision support. Because we offer the highest-frequency, broadest-coverage high-resolution satellite imagery, growers know they can rely on receiving imagery of their land when they need it.

2) Farming is still a relationship business
Many growers rely on trusted advisors to make technology recommendations.

3) Field testing and ROI validation is key to grower adoption
Growers can’t waste time testing new and unproven technologies; they need to focus on what works. Planet is the world’s leading provider of agricultural satellite imagery. We partner with leaders in precision agriculture—companies committed to helping their growers benefit through imagery technology.

4) Machine learning and computer vision will be interesting to watch
There’s no shortage of data, whether it’s coming from the field or from remote sensors, and there are many new ways to monitor crop health. Machine learning and computer vision are promising fields that can help growers turn massive volumes of data into actionable information.

After a week of meetings, workshops and keynotes with some of the world’s ag-tech leaders, it’s clear that smart farms and precision agriculture are revolutionizing the way crops get from the field onto your dinner table. It’s an exciting space to be in.

Learn more about satellite imagery’s role in the precision agriculture movement.

Meet Brice Ménard, Planet Labs Ambassador and Scientist-in-Residence

When Planet launched the Ambassadors Program a few weeks ago, we expected applications from all manner of Earth scientists. And indeed, innovative ideas have been coming in from geologists, cryosphere experts, forest ecologists and others.

But one application was, well, out of this world. Brice Ménard is a professor in the Department of Physics and Astronomy at Johns Hopkins University. In 2014 Brice was awarded the prestigious Packard Fellowship for Science and Engineering. Among other things, the fellowship will support his work to develop novel techniques to estimate the distance of galaxies.

Brice has been working on the Sloan Digital Sky Survey, an effort to map the sphere of the sky that has generated one of the largest astronomical datasets. The similarity with Planet’s dataset for the surface of the Earth was uncanny:

“While astronomers have been looking up, observing the sphere of the sky, orbiting satellites have been pointed downward, attempting to image the sphere of Planet Earth. Interestingly, these two groups of people share many tasks and challenges. They just happen to look in opposite directions.”

Read about Brice’s work, and his plans as a Scientist-in-Residence. And watch this, uh, “space” for updates on his work with Planet’s unique dataset.

Twenty Doves Take Flight on Successful OA-6 Launch

Tonight (March 22, 2016) at 11:07pm EDT, 20 of our Flock 2e Prime satellites successfully launched into Low Earth Orbit.

The Atlas V rocket launches the Cygnus spacecraft from Kennedy Space Center in Florida. Image: NASA.

The Doves of Planet Labs Flock 2e Prime are now safe on board the Cygnus spacecraft, laterally accelerating their way towards the International Space Station. On Saturday (March 25) astronauts on the space station will “catch” the Cygnus using the Canadarm2 and unload our Doves. If you get a chance, tune in to the live grappling sequence on NASA TV—it’s really cool.

Flock 2e Prime will remain on the ISS until late spring 2016, when they’ll be deployed into orbit via Nanoracks deployers.

Team Planet celebrates a successful launch with traditional launch day pancakes

This is an especially important launch for Planet, as Flock 2e Prime will increase our on-orbit collection capacity in both true-color (RGB) and near-infrared bands. Two special Flock 2e Prime Doves are tech demos of our next-generation Dove satellite, never before tested in orbit. Once Flock 2e Prime deploys, more imagery data will be flowing into our online platform than ever before.

We’re excited to see these satellites in action this spring. Stay tuned for Flock 2e Prime deployment updates.

A Hands-On Guide to Color Correction

Imaging the entire planet every day would be easy if Earth were an airless rock, but unfortunately, it’s not. (Actually, even mapping an airless rock is challenging, but bear with me.) Yes, the atmosphere allows us to breathe, and protects us from lethal levels of ultraviolet radiation, yadda yadda yadda, but it makes remote sensing challenging.

Airless bodies, like Comet 67P/Churyumov-Gerasimenko, are relatively easy to take pictures of. Credit: ESA/Rosetta/NAVCAM.

This is especially true for true-color (red, green, blue) imaging, because we know (or think we know) what to expect. Clouds are white, water is blue, forests are green, and deserts are yellow. Or red. Maybe brown. Even black if there happens to be an active volcano nearby.

Gates of the Arctic National Park. True-color imaging is hard because it’s familiar: we know what many landscapes are supposed to look like from personal experience. National Park Service, via flickr.

The atmosphere scatters light from the sun before it hits the ground (or a cloud, but we don’t care about those at the moment), and then scatters the reflected light again on its way back to a sensor. The atmosphere even scatters some light into the camera that never reached the ground at all.

That would be challenging enough, but the atmosphere also changes from place to place (the air above a desert is typically dry, while the air above a forest is usually moist and often filled with tiny aerosol droplets) and over time (a hazy summer day compared to a crisp fall evening).

Smoke from a forest fire hovers over the Sierra Nevada. This would be so much easier without an atmosphere. And forest fires. (Photographed from the air, en route from Washington, DC to San Francisco, CA.)

On top of this, what we see with our eyes is very different from the raw data captured by a scientific instrument or digital camera. Not only do our pupils open and close based on the amount of available light (think about the blinding brightness when you step into the outdoors from inside a dark room on a sunny day), but color and brightness are both relative—they change based on everything else in your field of view.

The checkershadow illusion: the squares marked A and B are the same shade of gray. Seriously. ©1995, Edward H. Adelson. This image may be reproduced and distributed freely.

Finally, satellite images of the Earth need to be adapted to the dynamic range and color gamut of display technologies (both screen and print), which are quite limited compared to the real world.

Because of this, it’s very difficult to do automatic or universal color corrections. For the very best results, do it by hand.

Color correction is the process of adjusting raw image data so the resulting picture looks realistic. It’s not a process of making a scientifically accurate picture—our visual system is too non-linear, and too mutable.

Conceptually, you can break color correction down into three facets: brightness (the overall level of light in an image), contrast (the relative light levels of adjacent areas in an image), and color balancing (adjusting the overall hue of an image). Brightness and contrast together can be considered tonal adjustments. In practice, these three elements are inextricably linked—changes in one facet affect the others.

That said, how do you change an image of Beijing from this:

Looking at Beijing through 100 kilometers of atmosphere.

To this:

Beijing in color.

Step One

To get started, load up your image in Photoshop, or other image editing program of your choice. (I’m familiar with Photoshop, so it’s easier for me to explain how I work using it as an example rather than something like GIMP. That said, the process should be similar no matter what tool you’re using. Some scientific image processing applications also offer non-linear color enhancements, but they tend to be less interactive and more difficult to use.) The data are stored as 12 bits of information in a 16-bit file, so Planet images appear very dark, even black, when first opened. To expand the data (originally 4,096 values) to fill the entire range available in 16 bits (65,536 values), simply use the Levels command:

Image > Adjustments > Levels

Change the number 255 to 16, effectively multiplying every value by 16. The image will likely still be dark (unless it’s primarily clouds or desert), but you will be able to see some features.

The Levels controls. Change “255” to “16”.
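If you’d rather do this step outside Photoshop, the same stretch is a couple of lines of numpy. This is a sketch that assumes your scene is a 16-bit RGB TIFF; tifffile is one library that reads them.

```python
import numpy as np
import tifffile

# 12-bit data (0-4095) stored in a 16-bit container.
img = tifffile.imread("scene.tif").astype(np.uint16)

# Multiply by 16 to fill the 16-bit range (0-65520), the same thing
# the Levels "255 -> 16" trick does in Photoshop.
tifffile.imwrite("scene_stretched.tif", img * 16)
```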

Step Two

The next step is to find something white, for a point of reference, and then set the relative maximum levels of the red, green, and blue channels in an RGB image so that the surface that should be white is white. That is, white should be 255, 255, 255 in a 24-bit (8 bits per channel) image. Fresh snow usually works best, but puffy clouds, clean salt pans, or a white roof are OK substitutes. If there isn’t a good white point (which is often going to be the case for a clear image) look at other scenes in the strip to see if you can find something suitable. In this case, I found an image a few scenes to the north of Beijing with a nice thick bank of clouds.

Clouds are a reasonably good reference for white, especially when they’re not completely saturated.

Go ahead and copy/paste the cloudy image on top of the image you’re working on. Then add a curves Adjustment Layer on top of all the image layers (rearrange layers in the layers palette by clicking and dragging), so you can make changes non-destructively. In other words, you can mess around to your heart’s content and still be able to undo everything with the click of a mouse.

Layer > New Adjustment Layer > Curves

You adjust the variables on the curve from the properties palette, which you get to by double-clicking the icon for the curve in the layers palette. [If none of this makes sense, track down a copy of Adobe Photoshop Classroom in a Book. It is the best introduction to Photoshop I know of.]

The Photoshop curve properties palette.

In the curve properties palette there’s a white eyedropper that (conveniently) sets any pixel you click on to white. Select the white eyedropper, then set the sample size to 3 by 3 Average (this will prevent a single outlier from affecting the image disproportionately). Now go find someplace, preferably someplace very bright, in your cloudy or snowy image that should be white, but isn’t, and click on it. Voila!

An important aside: because Planet Labs imagery sometimes saturates, you need to find a clump of pixels where none of the three channels is already maxed out—red, green, and blue should all have values less than 255. Keep Photoshop’s info palette open; it has a readout of the RGB values for the pixels under your cursor.
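For the curious, here’s a rough numpy analogue of the white eyedropper with a 3 by 3 Average sample. It’s an illustration of the math, not Planet’s pipeline; `row` and `col` are the coordinates of your chosen white reference.

```python
import numpy as np

def set_white_point(img, row, col, max_val=65535):
    # Average a 3x3 patch around the chosen reference, per channel.
    patch = img[row - 1:row + 2, col - 1:col + 2].reshape(-1, 3).astype(np.float64)
    assert patch.max() < max_val, "pick a patch with no saturated channels"
    gains = max_val / patch.mean(axis=0)   # per-channel gain so the patch becomes white
    out = img.astype(np.float64) * gains
    return np.clip(out, 0, max_val).astype(np.uint16)
```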

The original cloudy image, with Photoshop info window superimposed.

Modified image, after a white point is selected.

Step Three

Now you have an image with white whites, and light areas of the image should appear with more-or-less correct hue. There’s even a way to check. With the white eyedropper still selected, and your cursor over the image, hold down the option key. This will display only the saturated pixels in an image. Pixels saturated in all three bands appear white, pixels saturated only in blue appear blue, pixels saturated in red and blue appear magenta, and so on. Here’s what it looks like for our cloudy image:

Fully saturated pixels appear white, pixels saturated in blue and red are magenta, etc.

Close, but you can tell by the remaining blue and magenta fringing surrounding the fully-saturated areas that there’s not enough green. Fix this by adjusting the white point of the green band independently of red and blue. (The white point is the open square on the top right of the diagonal white line in the curves palette. The white line is the curve itself. As long as it’s straight you’re making linear adjustments. When you add additional points and bend it, you’re making nonlinear adjustments.)

Pick the green band from the drop-down menu labeled “RGB”. This will reveal a green curve (that starts as a straight line) and a green histogram. Adjusting this curve will modify the green channel alone, which allows you to change the color of an image, not just its brightness and contrast. Changes that increase the relative amount of green make the image more green (obviously) while changes that decrease green shift the hue of the image towards magenta (red + blue). To adjust the curve, click and drag on it, or click on a point and use the arrow keys to change values one at a time. For most PlanetScope 2 imagery, the green white point will end up more or less midway between the red and blue white points.
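And here’s a rough stand-in for that option-key clipping preview, if you want to check saturation programmatically. It’s a sketch assuming the same uint16 RGB array as before; after the Step One stretch, saturated pixels sit at 4095 × 16 = 65520.

```python
import numpy as np

def clipping_preview(img, sat_val=65520):  # 4095 * 16 after the Step One stretch
    # White where all three channels are clipped, blue where only blue is,
    # magenta where red and blue are, and so on; black means no clipping.
    return np.where(img >= sat_val, 255, 0).astype(np.uint8)
```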


Step Four

When the whites look good, it’s time to return to the cloud-free image of Beijing itself—just hide (click the eye icon in the layers palette) the layer with the clouds in it. Because the curve adjustments are independent of the pixels they are modifying, they’ll be applied to the Beijing image exactly as they were to the cloudy image.

Beijing, with white point corrections derived from the nearby cloudy image.

The next step is to adjust the overall brightness and increase contrast. Scattered light brightens the entire image, especially shadowed areas, so we need to make the darks darker. Grab the black point (the open square on the lower left end of the curve) and move it to the right. Every pixel with an overall brightness lower than the black point value will now be black (red = 0, green = 0, and blue = 0) and pixels that are brighter than the black point will be scaled accordingly. As you’re moving the black point, look at the histogram displayed underneath the curve: this shows how many pixels there are at each brightness level. Keep moving the point to the right until the slope of the histogram starts to get steep.
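In numpy terms, the black point move looks something like this (a sketch; `black` can be a single value, or a per-channel array as in Step Six):

```python
import numpy as np

def set_black_point(img, black, max_val=65535):
    # Values at or below `black` clip to zero; everything brighter
    # rescales linearly so that white stays white.
    out = (img.astype(np.float64) - black) * max_val / (max_val - black)
    return np.clip(out, 0, max_val).astype(np.uint16)
```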

Regions without much topographic relief, buildings, dark rocks, or dense vegetation are not going to have any surfaces that are very dark (since there is nothing tall enough to cast a shadow), so don’t be quite as aggressive at setting black points in these cases.

This increases contrast (we now have both black and white pixels in our image, using the full dynamic range of the monitor), but you will still need to adjust the relative brightness of the image to correspond to the non-linear nature of our vision.


Step Five

Click near the center of the curve to add an additional point, then drag it up and to the left, changing the curve from a line into an arch. This expands the difference in brightness between darker pixels, and compresses the difference between brighter pixels—which mimics the way our eyes and brains process changing light levels. Keep moving up and left until the mid tones of the image look balanced.
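A gamma curve with gamma below 1 is one simple way to approximate that arch in code (an approximation only: Photoshop’s curve is a spline, not a power law):

```python
import numpy as np

def lift_midtones(img, gamma=0.7, max_val=65535):
    # Gamma < 1 stretches the darks apart and compresses the brights,
    # mimicking the nonlinear response of our eyes.
    x = img.astype(np.float64) / max_val
    return (np.power(x, gamma) * max_val).astype(np.uint16)
```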

White point plus tonal correction, but no black point correction. This means the shadows are brighter and bluer (plus a little greener) than they should be.

Take a look at the image. It should have light regions that are properly color balanced, the overall contrast should be mostly ok, and there should be contrast throughout the full range of brightness in the image.


Step Six

You’ll likely notice, however, that the dark areas are a lot too blue and a bit too green, and not quite as dark as they should be. This is because the atmosphere scatters blue (short wavelength) light more strongly than red (long wavelength) light. Correct for this by individually adjusting the black points for the blue and green channels. You’ll almost always need to move blue further than green. I always leave a little bit of blue in the darkest shadows, because the shadows we see at ground-level are also blue. I’m not trying to make imagery that’s scientifically accurate—I’m trying to make imagery that looks like the world we interact with every day.
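In code, this is the Step Four sketch again with per-channel values. The numbers below are hypothetical; tune them by eye, with blue getting the biggest shift.

```python
import numpy as np

# Per-channel black points (R, G, B): blue largest, because the
# atmosphere scatters short wavelengths most. `img` is the
# white-point-corrected array from the earlier sketches.
corrected = set_black_point(img, np.array([900.0, 1300.0, 2100.0]))
```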

The finished curve for this image.

This should result in an image that looks good, but not always great. Since each individual adjustment influences the entire image, you’ll likely need to go back and tweak the individual settings for each channel, or the overall shape of the curve. It may even help to add a second or third point to the combined RGB curve—just make sure that the slope varies smoothly. Knees or kinks are often visible either as regions of excessive contrast or as washed-out areas.

The finished image. I’ve also increased the saturation slightly to make colors richer.

If you’re having trouble creating something that looks right, go to Panoramio or flickr and look for some ground photos. They can be an immense help, especially for regions with unfamiliar terrain. Keep in mind that landscape photography is often done near sunset or sunrise (the golden hour), and may be yellower and more saturated than it would be the rest of the day. Google Maps, Apple Maps, Bing, Mapbox, etc. are also good references, but they often break down at high resolution, so don’t take any of them as authoritative—trust your own eyes and experience.

Unfortunately, this method of color correction doesn’t scale too well beyond a single strip of data. We’re using it to enhance individual images for the gallery, and to inform the other techniques we’re experimenting with to craft seamless mosaics.

If you want to give this a try, here are some sample images (with preliminary data) to practice on. (You’ll need to right-click and download the images; 16-bit TIFFs won’t display in most browsers.)

Please share your results with us on Facebook or Twitter.

20 Doves to Launch on CRS OA-6 Mission

Twenty Dove satellites are currently packed away in their Nanoracks deployers on board the Cygnus spacecraft and awaiting launch. We’re calling these Doves “Flock 2e Prime” as a continuation of the “Flock 2e” series, which launched to the International Space Station in a similar manner in December on the OA-4 mission. The Atlas V rocket will loft the Cygnus into orbit tomorrow (Tuesday, March 22) at 11:05 pm EDT. Cygnus is headed to the International Space Station as part of Orbital ATK’s OA-6 mission.

Astronaut Scott Kelly captured this image of the Cygnus approaching the Space Station on December 9, 2015. 12 Flock 2e Doves were packed inside. Image: NASA

Flock 2e Prime comprises our standard RGB imaging Dove satellites, along with Dove satellites capable of imaging in near-infrared + RGB, and a handful of tech demos of our next-generation Dove satellite design. The Cygnus is also filled with crew supplies, a 3D printer, and the (very cool) Saffire-1 flammability experiment. Learn a little more about what else is on board.

Once in orbit, the Cygnus will “chase down” the Space Station over the course of four days. After berthing with the ISS, the Expedition 47 crew will unload Flock 2e Prime and prepare for its deployment later this spring.

If the launch is successful, 133 Doves will have made it into Low Earth Orbit over the course of 3 years.

Watch the launch alongside us tomorrow on the NASA TV Livestream.