Seeing ultraviolet in the sky

Reading about bee vision a couple of years ago got us thinking: How is it possible for bees to know the position of the sun, even on a cloudy day?

The theory at the time was that bees are able to measure the polarization of sunlight and can use that to deduce the position of the sun. Although it seems that light scattering from multiple layers of clouds might muddy the polarization, apparently scientists studying bee behavior think that bees use polarization even on the cloudiest of days. You can measure the polarization of light from the sky yourself using a radial arrangement of polarizing filters or by building a fancy polarization camera.

However, another interesting aspect of bee vision is that bees are able to see ultraviolet light.  This made us wonder: Could bees determine the sun’s position, even on cloudy days, by looking for the strongest source of UV light?

The scattering of UV light by clouds is a complicated phenomenon. Forrest Mims III and others have reported on a counterintuitive effect whereby UV exposure at ground level might be more intense on a cloudy day than on a clear day. We know that shorter wavelengths of light, such as UV, are scattered more by clouds than longer wavelengths of light, such as visible and IR light.  The question is whether this scattering obscures the position of the sun.

Time to measure things. We started with the GUVA-S12SD UV-B sensor breakout available from Adafruit. This sensor provides an analog voltage output that is some fraction of the input voltage: the more UV that falls on the sensor, the more voltage it will output. We knew we wanted to measure the relative UV intensity coming from different quadrants of the sky, to try to triangulate the position of the sun, so we decided to use four sensors.
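For a rough sense of scale, Adafruit's guide for this breakout suggests that the output voltage divided by 0.1V approximates the UV index; treat that constant as an assumption to verify against the datasheet. A minimal sketch:

```python
def approx_uv_index(vout_volts):
    """Rough UV index from the GUVA-S12SD breakout's analog output.

    Assumption: the commonly cited conversion for this breakout is
    UV index ~= Vout / 0.1 V, i.e. multiply volts by 10.
    """
    return vout_volts * 10.0

approx_uv_index(0.5)  # → 5.0
```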

Wiring four UV-B sensors

We designed a custom mount for the four sensors that would hold them at a specific geometric relationship to each other, and a friend 3D printed them for us.

Plastic mount for UV sensors

We mounted the sensors by pushing them through the plastic mount from behind, and used a single screw to hold each sensor in place.  We reused an aluminum project box to hold everything.

Sensors mounted on the box

We wanted to continuously log the sensor data throughout the day, so we used the ArduLog-RTC, which is essentially an Arduino UNO that logs data to a microSD card and can timestamp the measurements with a battery-backed real-time clock.

All the stuff that goes in the box

We used a 3S LiPo battery with a 3.3V converter wired to the ArduLog-RTC; this also supplied power to the four sensors. We wired the four analog outputs of the sensors to four analog inputs (A0-A3) on the ArduLog-RTC.  We’ve uploaded the basic logging code to github. (If you’re unsure how to program the ArduLog-RTC: we used a serial TTL connection by soldering header pins to the six through-holes on the left side of the board (TX/RX etc.), connecting an FTDI cable to those pins, and programming using the Arduino application.)

Packaged device

We took the device outdoors and oriented it using a compass so that the sensors pointed roughly NE, SE, SW, NW.  Then we started collecting data.

The logged data looks something like this CSV:

20160325 09:18:37,116,48,62,110
20160325 09:18:47,117,48,62,111
20160325 09:18:57,117,48,62,111

That is, a timestamp followed by the analog voltage measured at each sensor. For scale: the maximum value is dictated by the voltage supplied to the sensors, which would be about 3.3V, corresponding to a measurement of 330. The minimum value is zero when no UV light is falling on the sensor.
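A minimal Python sketch for reading these records back, assuming (per the scale described above) that each logged value is in hundredths of a volt:

```python
from datetime import datetime

def parse_log_line(line):
    """Parse 'YYYYMMDD HH:MM:SS,a0,a1,a2,a3' into (datetime, volts list)."""
    stamp, *values = line.split(",")
    when = datetime.strptime(stamp, "%Y%m%d %H:%M:%S")
    # A reading of 330 corresponds to 3.3 V, so divide by 100 for volts.
    volts = [int(v) / 100 for v in values]
    return when, volts

when, volts = parse_log_line("20160325 09:18:37,116,48,62,110")
# volts → [1.16, 0.48, 0.62, 1.1]
```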

Here’s a plot of the data from a single day’s measurements:

A few interesting things to note from this plot:

  • It appears sens0 and sens3 point substantially more eastward than the other two sensors. This is consistent with the actual physical arrangement of the sensors.
  • The peak values for sens0 and sens3 are measured around noon.
  • There are some low-intensity measurements at the beginning and end of the day: one guess is that no direct sunlight falls on the sensors then (due to shadowing from surrounding trees and structures), so during these periods we’re measuring only the UV light scattered by the sky.
  • Other than noise at the beginning and end of the day (possibly caused by obstructions), the measured UV intensity seems to rise and fall smoothly.  This seems to be consistent with a mostly cloudless sky throughout the day.
  • There is a very small UV signal measured as early as 4:30 AM — hours before the 6:56 AM sunrise.  No idea what causes this.
  • The sensors that are facing more northward (sens0 and sens1) measure more intense UV than the two sensors facing more southward.  At first glance this doesn’t make sense, since I would expect southward-facing sensors to detect more intense UV.  I need to verify that sens0 and sens1 are facing NE and NW, respectively; if they are, then there might be some odd angle-of-view effect happening here.

Measurements from an intermittently cloudy day

On 20160328 multiple large clouds passed between the detector and the sun, and the effect on the sensors looked like this:

It appears that clouds cause a large dip in intensity, though the relationship between the four detectors remains about the same.

Tracking Solar Position

We can calculate the position of the sun from the measured intensities.
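One simple approach is to treat each sensor as a unit vector along its compass heading and take the intensity-weighted vector sum; the direction of that sum is a rough azimuth estimate. This is a sketch, not the method we've settled on, and the channel-to-heading mapping below is an assumption that still needs to be verified against the physical arrangement:

```python
import math

# Assumed channel-to-heading mapping (degrees clockwise from north).
# The actual sens0..sens3 orientations need verification.
HEADINGS = {"sens0": 45, "sens1": 315, "sens2": 225, "sens3": 135}

def estimate_sun_azimuth(readings):
    """Estimate solar azimuth (degrees clockwise from north) as the
    intensity-weighted vector sum of the sensor headings.

    readings: dict mapping sensor name -> measured intensity.
    """
    x = sum(r * math.sin(math.radians(HEADINGS[s])) for s, r in readings.items())
    y = sum(r * math.cos(math.radians(HEADINGS[s])) for s, r in readings.items())
    return math.degrees(math.atan2(x, y)) % 360

# All intensity on the NE-facing sensor puts the sun due NE:
estimate_sun_azimuth({"sens0": 100, "sens1": 0, "sens2": 0, "sens3": 0})  # → 45.0
```

Matching the estimated azimuth against the sun's known position for the logged timestamp would then tell us how well UV intensity alone tracks the sun.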

Next Steps

  1. Accurately measure the max voltage supplied by the power supply (so we know what the UV measurement range should be)
  2. Mount the measuring device up high to avoid some shadows caused by obstructions
  3. Measure UV on a substantially cloudy day
  4. Calculate the position of the sun based on the measured relative UV intensities, and match that against the logged time.


mavleash: track and control your drone via 3G cellular

One of the nagging worries we have when flying a drone is that it will, despite our best efforts, fly away into the wild blue yonder.

If you’re lucky, and the drone lands within radio range of your handheld controller, you might be able to find the drone on a map, and go pick it up.  If you’re unlucky, and the drone flies out of range of the controller, you may never see it again.

Recently I decided to hack together a solution for tracking my drones, even if they fly out of range of my ground controller.  Fortunately I live and fly in areas that have good 3G cellular coverage, and the folks at Particle have recently released a very compact 3G cellular data board called the Electron.  The Electron includes a u-blox cellular data modem as well as a low-cost monthly data plan.

Particle also makes a cheaper WiFi version of their board called the Photon.  To avoid burning up my Electron data plan, I decided to start by prototyping using the Photon.  Fortunately the Particle API is nearly identical for both the Photon and the Electron, so I could be assured of a smooth transition from one to the other.

Initial components

There are just a few basic components required: a Particle board (the Photon or Electron), a GPS receiver, and a mavlink-compatible autopilot. The GPS receiver is required so that the autopilot knows where it is. For the autopilot I used a trusty Pixhawk v2.4.6, which is available in many different flavors and form factors from numerous vendors on eBay and elsewhere.

What’s mavlink?

The mavlink protocol is a communications protocol used by numerous drone autopilots and accessories.  A typical use is that an Arducopter or similar autopilot sends its telemetry in mavlink format via a telemetry radio (such as the very common SiK telemetry radio) to a ground control station (GCS) such as QGroundControl.  In this case, I set up my Pixhawk using QGroundControl, including flashing the Pixhawk with the latest Stable release of the PX4 autopilot software.
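For a sense of what's on the wire, a mavlink 1.0 frame is a small byte-oriented packet: a 0xFE start byte, payload length, sequence number, system and component IDs, a message ID, the payload, and a two-byte checksum. Here's a minimal Python sketch that splits a frame into those fields (checksum validation is omitted; the generated C headers handle that in the real code):

```python
import struct

MAVLINK1_STX = 0xFE  # start byte for mavlink 1.0 frames

def parse_mavlink1_frame(buf):
    """Split one mavlink 1.0 frame into its header fields and payload.

    Frame layout: STX, payload length, sequence, system id, component id,
    message id, payload bytes, 2-byte CRC. CRC checking is skipped here;
    a real implementation must validate it.
    """
    stx, length, seq, sysid, compid, msgid = struct.unpack_from("<6B", buf)
    if stx != MAVLINK1_STX:
        raise ValueError("not a mavlink 1.0 frame")
    payload = buf[6:6 + length]
    return {"seq": seq, "sysid": sysid, "compid": compid,
            "msgid": msgid, "payload": payload}

# A HEARTBEAT frame (msgid 0) carries a 9-byte payload:
frame = bytes([0xFE, 9, 0, 1, 1, 0]) + bytes(9) + bytes([0, 0])
msg = parse_mavlink1_frame(frame)
# msg["msgid"] → 0
```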


The wiring between the Particle board and the Pixhawk was straightforward.  I took the serial UART output from the Pixhawk’s TELEM1 port and built a cable to connect to the Particle’s Serial1 port (which attaches to the RX and TX pins) as well as the Particle’s VIN and GND pins. Fortunately the TELEM1 port provides +5V power that is perfect for the Particle’s voltage input.

Photon proto wiring

Tip: I accidentally reversed the TX and RX pins multiple times while working on this project.  If the Particle application does not appear to be receiving any mavlink input, try reversing the TX and RX lines from the autopilot.


For reference, I’ve made the mavleash code available on github.

Because I was eventually going to send telemetry data and commands via a 3G cellular connection, and I was paying by the megabyte for data, I knew I didn’t want to forward raw mavlink from the autopilot to the Particle cloud.  That would work fine over the Photon’s WiFi connection, but it would be expensive and overkill over the Electron’s 3G connection.  Instead I decided to listen for a few key mavlink messages and forward summary data to the Particle cloud on a periodic timer.

I started by generating the common mavlink 1.0 headers using the mavgenerate tool.  Then I created a new Particle app project and quickly discovered that, by default, Particle projects don’t currently support descending into the directory hierarchy that mavlink requires.  So I needed to create a particle.include file that would pull in all of the mavlink headers as well as the sample app files.  This would limit me to using the Particle command line tools for builds instead of being able to use the otherwise great Particle Build web-based IDE.

Then I built a serial port reader (readMavlinkMsg) and mavlink message handler (handleMavlinkMsg).  The serial reader reads data from the Serial1 port (which is attached to the TX and RX pins on the Electron and Photon), and the message handler handles a few key mavlink messages, ignoring all others.  I won’t dive into why I chose these particular mavlink messages, but briefly I wanted to obtain sufficient data to find a drone if it flew away.

I will also point out that I used one of the mavlink messages to limit the rate at which I send data to the Particle cloud.  Mavlink provides a handy MAVLINK_MSG_ID_HEARTBEAT message that is sent by the autopilot at almost exactly 1Hz (once per second).  By counting the number of heartbeat messages received on the Particle board, I could decide how often to send updates to the ground; in this case I decided that a bit less than 0.3Hz was adequate.
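The heartbeat-counting throttle can be sketched like this (the class name and period are illustrative, not taken from the mavleash code):

```python
class HeartbeatThrottle:
    """Publish once every `period` autopilot heartbeats.

    HEARTBEAT arrives at roughly 1 Hz, so period=4 yields about 0.25 Hz,
    in the "a bit less than 0.3 Hz" range described above.
    """
    def __init__(self, period=4):
        self.period = period
        self.count = 0

    def on_heartbeat(self):
        """Call on each HEARTBEAT; returns True when it's time to publish."""
        self.count += 1
        if self.count >= self.period:
            self.count = 0
            return True
        return False

throttle = HeartbeatThrottle(period=4)
fired = [throttle.on_heartbeat() for _ in range(8)]
# fired → [False, False, False, True, False, False, False, True]
```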

Sending Data to the Particle Cloud

At first I experimented with using the Particle.variable methods to make individual bits of telemetry available in the Particle cloud (you’ll see this in registerCloudVariables).  This seems like a nice API for making one or two variables available for reading in the cloud; however, there’s not much device-side control over when these variables are published, and there is a nontrivial amount of data overhead to each publication.  Also, reading these variables from a client requires RESTful polling (see, e.g., the various setInterval functions in the mavleash.html file).

Next I experimented with Particle.publish, which allows you to publish a named event along with some payload data.  The advantage of this is that events can be published at a moment of your choosing, and you can carefully control what payload data is sent.   The first payload format I tried was simple JSON (see publishStateAsJson), where summary telemetry data was sent as name-value pairs. One big advantage of using JSON as the payload format is that it’s human-readable.  Another big advantage is that there are numerous JSON parsing libraries available for decoding it (it’s almost trivial to read into JavaScript, as you can see from the statejson_eventSource callback function).

Publishing with Particle.publish and JSON worked fine, but I still wanted my payloads to be smaller, so next I moved to using CSV (comma-separated values) as the payload format (see publishStateAsCSV).  The advantage and the disadvantage of CSV is that there are no parameter names: it’s just an ordered series of values, and you must always send every parameter so that the order is maintained.  In practice, with just seven parameters sent in the payload, I saw about a 40% savings in payload size.
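A quick way to see where the savings comes from: serialize the same seven-parameter summary both ways and compare sizes. The field names and values below are illustrative, not the actual mavleash fields:

```python
import json

# Hypothetical seven-parameter telemetry summary (illustrative fields).
state = {"lat": 21.306944, "lon": -157.858333, "alt": 52.3,
         "hdg": 270, "spd": 4.2, "bat": 11.8, "mode": 4}

json_payload = json.dumps(state, separators=(",", ":"))
csv_payload = ",".join(str(v) for v in state.values())

# CSV drops the quoted field names, so the payload shrinks substantially.
savings = 1 - len(csv_payload) / len(json_payload)
```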

Reading Data from the Particle Cloud

As a proof-of-concept, I created a mavleash.html page that accessed the Particle event API using javascript.  I’ve left the code with both JSON and CSV readers enabled, and commented-out the Particle.variable polling code, so you can see how those can be used.

For flight testing, I loaded the mavleash.html file locally onto my iPhone using Coda, and entered my Electron’s specific device ID and access token.

iOS Mobile client

In addition I created an IFTTT recipe to log events sent from the Electron in a Google Drive spreadsheet: each time the Electron publishes its position (as a ‘statecsv’ event), the recipe adds another row to the spreadsheet.  This can help you figure out where your drone went even if you don’t have mavleash loaded on your iPhone.

Sending Commands

In case of flyaway emergency, I wanted to be able to send a few commands to the drone from the mavleash app:

  • Land.  Cause the drone to land now, wherever it is
  • RTL.  Cause the drone to return to where it was launched (un-flyaway)
  • Reset. Cause the Electron itself to reset, for testing purposes.

I implemented support for this in the handleCommand methods.  Since Particle.function only supports four functions currently,  I decided to make the actual command a string parameter. (See landCommand for an example of sending the commands using the cloud API.)
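The string-dispatch idea can be sketched in a few lines (the names and return codes here are illustrative, not the actual handleCommand implementation):

```python
def handle_command(arg):
    """Map a single string argument to an action code; -1 if unknown.

    The lambdas are stand-ins for sending the matching mavlink
    command to the autopilot.
    """
    commands = {
        "land": lambda: 1,
        "rtl": lambda: 2,
        "reset": lambda: 3,
    }
    action = commands.get(arg.strip().lower())
    return action() if action else -1

handle_command("Land")  # → 1
handle_command("warp")  # → -1
```

Packing several commands behind one string-taking function is a handy way to stay under Particle.function's four-function limit.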

Mounting on the Drone

Here’s an overview of the components you need to mount on the drone:

Electron components to mount

Your drone might be different.

For flight tests, I mounted the required components on an F450 quadcopter:

Electron mounted on F450 arm

There are probably better ways to mount the Electron, but Blenderm tape works just fine. In this configuration I mounted the Electron’s Taoglas antenna with the backing tape attached to one of the quad’s ESCs, which means the antenna traces were facing the ground.

From testing I also decided that I didn’t need to use the external battery supplied with the Electron– it appeared that the power supplied by the Pixhawk’s +5V line was sufficient.

Flight Testing

The next step was to try this in the real world.  I took the F450 out to a local flying field:

Electron in field

After powering up, I waited a few minutes for the Electron to connect to the 3G network– however it never obtained a connection.  Thinking that this might be due to the Electron’s antenna facing the ground, I held the quad aloft for about a minute after power-cycling the quad, and sure enough the Electron started “breathing cyan”, indicating it had a 3G connection.

The first flight I placed the vehicle into position hold mode, then moved it around a bit as I watched the position indicator on the mavleash app. Sure enough, the tracking was working.

The next few flights, I tested sending commands to the vehicle.  I flew the vehicle in position hold mode, turned off my RC transmitter (to simulate going out-of-range during a flyaway) then sent a series of commands to the vehicle using the buttons on the mavleash app:

  • Land.  This worked perfectly.  Within a second of pressing the Land button in the mavleash.html app, the vehicle started an automated landing.
  • RTL.  This did nothing, as my ground tests indicated it might.  The “native PX4” software stack I installed on the Pixhawk doesn’t have a simple RTL handler, and I think that in position hold (POSCTL) mode, it disallows RTL.  Arducopter apparently does support the simpler RTL command, but I haven’t tried it.
  • Reset.  I tried remotely resetting the Electron itself, and this worked as well. Telemetry stopped flowing from the vehicle for about a minute, then started flowing again.

Next Steps

Some folks may wish to run both mavleash and a more traditional telemetry radio such as the SiK 915MHz/433MHz, or monitor their mavlink telemetry using MinimOSD.

You can readily split the output line of the autopilot’s mavlink port and supply it to multiple readers (SiK, MinimOSD, and Electron).  This will provide each of them with the telemetry stream from the autopilot.  You can configure the wiring so that one of the radios can write commands back to the autopilot’s input pin. For example, you could use the MAVBoard Lite to simplify the wiring.

However, if you wish to send commands to the autopilot from multiple sources, this is tricky.  One option would be to use a digital multiplexer (e.g. NC7SZ157P6X) to switch between, say, the SiK and the Electron.  You could output a mux control signal from one of the Electron’s digital output pins so that when the Electron is sending commands, it owns the autopilot’s input pin.

Precision-guided landing and takeoff of multirotor/VTOL drones

We’ve been working on precision guidance of multirotor/VTOL drones for fully automatic takeoff and landing scenarios.  Here’s a glimpse at a recent simulation of one guidance method.

This is an orthographic projection of the drone-landing platform system viewed from above.  The drone is visible as the black-outline circle with colored dots in the center.  The landing platform is visible as a green colored square with numbered and colored dots at the corners.

The large red and blue rectangles represent a projection angle between the drone and the landing platform.  As the drone descends, the angle shrinks.

In this simulation we randomly position the drone somewhere near the landing platform and then use an algorithm to guide the drone to a precise landing.

The platform in this model is scaled to about 1m sides.
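The shrinking projection angle is simple geometry: for a drone directly above the platform's center, a platform of side length s subtends an angle of 2*atan((s/2)/h) at altitude h. A quick sketch:

```python
import math

def platform_angle(side_m, altitude_m):
    """Angle (degrees) subtended by a platform of width side_m, as seen
    from a drone hovering altitude_m directly above its center."""
    return math.degrees(2 * math.atan2(side_m / 2, altitude_m))

# The angle grows as the drone descends toward the ~1 m platform:
high = platform_angle(1.0, 5.0)   # ~11.4 degrees
low = platform_angle(1.0, 0.5)    # 90 degrees
```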

Automated flight over Papaikou Orchard with X525 Quadcopter

I’ve been experimenting with Ardupilot’s automated quadcopter flight modes and the andropilot app for android devices. Today I flew a simple loop over our pasture with a few waypoints.

Here’s a video of automated flight over Papaikou Orchard and the flight path traced out in Google Maps.

You can see the main town of Papaikou (about 9 miles north of Hilo, on the Big Island of Hawaii), and when the quad returns for a landing you can see giant Mauna Kea in the distance.