
Seeing ultraviolet in the sky

Reading about bee vision a couple years ago started us thinking: How is it possible for bees to know the position of the sun, even on a cloudy day?

The theory at the time was that bees are able to measure the polarization of sunlight and can use that to deduce the position of the sun. Although it seems that light scattering from multiple layers of clouds might muddy the polarization, apparently scientists studying bee behavior think that bees use polarization even on the cloudiest of days. You can measure the polarization of light from the sky yourself using a radial arrangement of polarizing filters or by building a fancy polarization camera.

However, another interesting aspect of bee vision is that bees are able to see ultraviolet light.  This made us wonder: Could bees determine the sun’s position, even on cloudy days, by looking for the strongest source of UV light?

The scattering of UV light by clouds is a complicated phenomenon. Forrest Mims III and others have reported on a counterintuitive effect whereby UV exposure at ground level might be more intense on a cloudy day than on a clear day. We know that shorter wavelengths of light, such as UV, are scattered more by clouds than longer wavelengths of light, such as visible and IR light.  The question is whether this scattering obscures the position of the sun.

Time to measure things. We started with the GUVA-S12SD UV-B sensor breakout available from Adafruit. This sensor provides an analog voltage output that is some fraction of the input voltage: the more UV that falls on the sensor, the more voltage it will output. We knew we wanted to measure the relative UV intensity coming from different quadrants of the sky, to try to triangulate the position of the sun, so we decided to use four sensors.

Wiring four UV-B sensors

We designed a custom mount for the four sensors that would hold them at a specific geometric relationship to each other, and a friend 3D printed them for us.

Plastic mount for UV sensors

We mounted the sensors by pushing them through the plastic mount from behind, and used a single screw to hold each sensor in place.  We reused an aluminum project box to hold everything.

Sensors mounted on the box

We wanted to continuously log the sensor data throughout the day, so we used the ArduLog-RTC, which is essentially an Arduino UNO that will log data to a microSD card and can timestamp the measurements with a (battery backed-up) real time clock.

All the stuff that goes in the box

We used a 3S LiPo battery with a 3.3V converter wired to the ArduLog-RTC; this also supplied power to the four sensors. We wired the four analog outputs of the sensors to four analog inputs (A0-A3) on the ArduLog-RTC. We’ve uploaded the basic logging code to github. (If you’re unsure how to program the ArduLog-RTC: we soldered header pins to the six through-holes on the left side of the board (TX/RX, etc.), connected an FTDI cable to those pins for a serial TTL connection, and programmed it using the Arduino application.)

Packaged device

We took the device outdoors and oriented it using a compass so that the sensors pointed roughly NE, SE, SW, NW.  Then we started collecting data.

The logged data looks something like this CSV:

20160325 09:18:37,116,48,62,110
20160325 09:18:47,117,48,62,111
20160325 09:18:57,117,48,62,111

That is, a timestamp followed by the analog voltage measured at each sensor. For scale: the maximum value is dictated by the voltage supplied to the sensors, which would be about 3.3V, corresponding to a measurement of 330. The minimum value is zero when no UV light is falling on the sensor.
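Each line is easy to post-process. Here’s a rough Python sketch of a parser (the volts conversion assumes one logged count per 10 mV, which is our reading of the 330-count maximum noted above, so treat that scaling as an assumption):

```python
from datetime import datetime

def parse_log_line(line):
    """Parse an ArduLog line like '20160325 09:18:37,116,48,62,110'.

    Returns the timestamp and the four sensor readings converted to
    volts, assuming one count per 10 mV (so 330 counts is about 3.3 V).
    """
    fields = line.strip().split(",")
    ts = datetime.strptime(fields[0], "%Y%m%d %H:%M:%S")
    volts = [int(v) / 100.0 for v in fields[1:]]
    return ts, volts
```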

Here’s a plot of the data from a single day’s measurements:

A few interesting things to note from this plot:

  • It appears sens3 and sens0 point substantially more eastward than sens1 and sens2. This is consistent with the actual physical arrangement of the sensors.
  • The peak values for sens0 and sens3 are measured around noon
  • There are some low-intensity measurements at the beginning and end of the day: one guess is that there is no direct sunlight falling on the sensors during these periods (due to shadowing from surrounding trees and structures), so we’re measuring the UV light scattered by the sky during these periods.
  • Other than noise at the beginning and end of the day (possibly caused by obstructions), the measured UV intensity seems to rise and fall smoothly.  This seems to be consistent with a mostly cloudless sky throughout the day.
  • There is a very small UV signal measured as early as 4:30 AM — hours before the 6:56 AM sunrise.  No idea what causes this.
  • The sensors that are facing more northward (sens0 and sens1) measure more intense UV than the two sensors facing more southward. At first glance this doesn’t make sense, since we would expect southward-facing sensors to detect more intense UV. We need to verify that sens0 and sens1 are facing NE and NW respectively; if they are, then there might be some odd angle-of-view effect happening here.

Measurements from an intermittently cloudy day

On 20160328 multiple large clouds passed between the detector and the sun, and the effect on the sensors looked like this:

It appears that clouds cause a large dip in intensity, though the relationship between the four detectors remains about the same.

Tracking Solar Position

We can calculate the position of the sun from the measured intensities.
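As a sketch of how that calculation might work, treat each sensor reading as a vector pointing in the direction the sensor faces and sum them (this assumes the sensors face exactly NE/SE/SW/NW and that intensity falls off symmetrically with viewing angle, both of which are untested assumptions):

```python
import math

# Assumed sensor facing directions (compass azimuths, degrees).
SENSOR_AZIMUTHS = {"NE": 45.0, "SE": 135.0, "SW": 225.0, "NW": 315.0}

def estimate_sun_azimuth(readings):
    """Estimate the sun's compass azimuth from per-sensor UV intensities.

    readings: dict mapping direction name -> measured intensity.
    Each reading weights a unit vector in the sensor's direction;
    the azimuth of the vector sum is the estimate.
    """
    x = sum(v * math.sin(math.radians(SENSOR_AZIMUTHS[d]))
            for d, v in readings.items())
    y = sum(v * math.cos(math.radians(SENSOR_AZIMUTHS[d]))
            for d, v in readings.items())
    return math.degrees(math.atan2(x, y)) % 360.0
```

Whether a simple weighted sum like this is accurate enough is exactly what the next-steps measurements should tell us.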

Next Steps

  1. Accurately measure the max voltage supplied by the power supply (so we know what the UV measurement range should be)
  2. Mount the measuring device up high to avoid some shadows caused by obstructions
  3. Measure UV on a substantially cloudy day
  4. Calculate the position of the sun based on the measured relative UV intensities, and match that against the logged time.

 

mavleash: track and control your drone via 3G cellular

One of the nagging worries we have when flying a drone is that it will, despite our best efforts, fly away into the wild blue yonder.

If you’re lucky, and the drone lands within radio range of your handheld controller, you might be able to find the drone on a map, and go pick it up.  If you’re unlucky, and the drone flies out of range of the controller, you may never see it again.

Recently I decided to hack together a solution for tracking my drones, even if they fly out of range of my ground controller.  Fortunately I live and fly in areas that have good 3G cellular coverage, and the folks at Particle have recently released a very compact 3G cellular data board called the Electron.  The Electron includes a u-blox cellular data modem as well as a low-cost monthly data plan.

Particle also makes a cheaper WiFi version of their board called the Photon.  To avoid burning up my Electron data plan, I decided to start by prototyping using the Photon.  Fortunately the Particle API is nearly identical for both the Photon and the Electron, so I could be assured of a smooth transition from one to the other.

Initial components

There are just a few basic components required: a Particle board (the Photon or Electron), a GPS receiver, and a mavlink-compatible autopilot. The GPS receiver is required so the autopilot knows where it is. For the autopilot I used a trusty Pixhawk v2.4.6, which is available in many different flavors and form factors from numerous vendors on eBay and elsewhere.

What’s mavlink?

The mavlink protocol is a communications protocol used by numerous drone autopilots and accessories. A typical use is that an Arducopter or similar autopilot will send its telemetry in mavlink format via a telemetry radio (such as the very common SiK telemetry radio) to a ground control station (GCS) such as QGroundControl. In this case, I set up my Pixhawk using QGroundControl, including flashing the Pixhawk with the latest Stable release of the PX4 autopilot software.

Wiring

The wiring between the Particle board and the Pixhawk was straightforward.  I took the serial UART output from the Pixhawk’s TELEM1 port and built a cable to connect to the Particle’s Serial1 port (which attaches to the RX and TX pins) as well as the Particle’s VIN and GND pins. Fortunately the TELEM1 port provides +5V power that is perfect for the Particle’s voltage input.

Photon proto wiring

Tip: I accidentally reversed the TX and RX pins multiple times while working on this project. If you find that you apparently aren’t receiving any mavlink input in the Particle application, try reversing the TX and RX lines from the autopilot.

Coding

For reference, I’ve made the mavleash code available on github.

Because I was eventually going to send telemetry data and commands via 3G cellular connection, and I was paying by the megabyte for data, I knew I didn’t want to send raw mavlink from the autopilot to the Particle cloud.  This would work fine with the Photon’s wifi connection, but it was expensive and overkill for the Electron’s 3G connection.  Instead I decided to listen for a few key mavlink messages and forward summary data to the Particle cloud on a periodic timer.

I started by generating the common mavlink 1.0 headers using the mavgenerate tool.  Then I created a new Particle app project and quickly discovered that, by default, Particle projects don’t currently support descending into the directory hierarchy that mavlink requires.  So I needed to create a particle.include file that would pull in all of the mavlink headers as well as the sample app files.  This would limit me to using the Particle command line tools for builds instead of being able to use the otherwise great Particle Build web-based IDE.
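For reference, a particle.include file is just a list of file globs, one per line, naming the extra files the CLI should compile. Ours looked roughly like this (illustrative paths; yours depend on where you put the generated mavlink headers):

```
*.h
mavlink/*.h
mavlink/common/*.h
```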

Then I built a serial port reader (readMavlinkMsg) and mavlink message handler (handleMavlinkMsg).  The serial reader reads data from the Serial1 port (which is attached to the TX and RX pins on the Electron and Photon), and the message handler handles a few key mavlink messages, ignoring all others.  I won’t dive into why I chose these particular mavlink messages, but briefly I wanted to obtain sufficient data to find a drone if it flew away.

I will also point out that I used one of the mavlink messages to limit the rate at which I send data to the Particle cloud. Mavlink provides a handy MAVLINK_MSG_ID_HEARTBEAT message that is sent by the autopilot at almost exactly 1Hz (once per second). By counting the number of heartbeat messages received on the Particle board, I could decide how often to send updates to the ground; in this case I decided that a bit less than 0.3Hz was adequate.
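The heartbeat-counting logic is simple enough to sketch in a few lines of Python (illustrative names, not the actual mavleash identifiers; four heartbeats per publish is just one choice that gives roughly 0.25Hz):

```python
# Publish to the cloud once every N ~1 Hz heartbeats.
HEARTBEATS_PER_PUBLISH = 4

class PublishGate:
    """Counts heartbeat messages and signals when it's time to publish."""

    def __init__(self):
        self.heartbeats = 0

    def on_heartbeat(self):
        self.heartbeats += 1
        if self.heartbeats >= HEARTBEATS_PER_PUBLISH:
            self.heartbeats = 0
            return True  # time to publish a summary to the cloud
        return False
```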

Sending Data to the Particle Cloud

At first I experimented with using the Particle.variable methods to make individual bits of telemetry available in the Particle cloud (you’ll see this in registerCloudVariables). This seems like a nice API for making one or two variables available for reading in the cloud; however, there’s not much device-side control over when these variables are published, and there is a nontrivial amount of data overhead for each publication. Also, reading these variables from a client requires RESTful polling (see e.g. the various setInterval functions in the mavleash.html file).

Next I experimented with Particle.publish, which allows you to publish a named event along with some payload data. The advantage of this is that events can be published at a moment of your choosing, and you can carefully control what payload data is sent. The first payload format I tried was simple JSON (see publishStateAsJson), where summary telemetry data was sent as name-value pairs. One big advantage of using JSON as the payload format is that it’s human-readable. Another big advantage is that there are numerous JSON parsing libraries available for decoding it (it’s almost trivial to read into javascript, as you can see from the statejson_eventSource callback function).

Publishing with Particle.publish and JSON worked fine, but I still wanted my payloads to be smaller. So next I moved to using CSV (comma separated values) as the payload format (see publishStateAsCSV).  The advantage and disadvantage of CSV is that there are no variable/parameter names– it’s just an ordered series of values, and you need to always send every parameter so that the order is maintained.  In practice, with just seven parameters sent in the payload, I saw about a 40% savings in payload size.
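To see where the savings come from, here’s a toy comparison in Python (the field names and values are made up, not the actual mavleash payload; your exact savings depend on how long your key names are):

```python
import json

# Hypothetical telemetry state: seven made-up fields, not the real payload.
state = {"lat": 19.7073, "lon": -155.0885, "alt": 12.4,
         "vbat": 11.8, "mode": 3, "sats": 9, "hdg": 271}

# JSON carries the key names in every message...
json_payload = json.dumps(state, separators=(",", ":"))

# ...while CSV relies on a fixed field order known to both ends.
csv_payload = ",".join(str(v) for v in state.values())

savings = 1 - len(csv_payload) / len(json_payload)
```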

Reading Data from the Particle Cloud

As a proof-of-concept, I created a mavleash.html page that accessed the Particle event API using javascript.  I’ve left the code with both JSON and CSV readers enabled, and commented-out the Particle.variable polling code, so you can see how those can be used.

For flight testing, I loaded the mavleash.html file locally onto my iPhone using Coda, and entered my Electron’s specific device ID and access token.

iOS Mobile client

In addition, I created an IFTTT recipe to log events sent from the Electron in a Google Drive spreadsheet: each time the Electron publishes its position (as a ‘statecsv’ event), the recipe adds another row to the spreadsheet. This can help you figure out where your drone went even if you don’t have mavleash loaded on your iPhone.

Sending Commands

In case of flyaway emergency, I wanted to be able to send a few commands to the drone from the mavleash app:

  • Land.  Cause the drone to land now, wherever it is
  • RTL.  Cause the drone to return to where it was launched (un-flyaway)
  • Reset. Cause the Electron itself to reset, for testing purposes.

I implemented support for this in the handleCommand methods.  Since Particle.function only supports four functions currently,  I decided to make the actual command a string parameter. (See landCommand for an example of sending the commands using the cloud API.)
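The string-dispatch approach can be sketched in Python like this (hypothetical names; the real handleCommand code is C++ in the Particle app):

```python
def handle_command(cmd, actions):
    """Dispatch a command string to its handler.

    Returns 0 on success, -1 for an unknown command (negative return
    values are a common way to signal errors from a Particle.function).
    """
    action = actions.get(cmd.strip().lower())
    if action is None:
        return -1
    action()
    return 0

# Example wiring for the three commands above (the lambdas stand in
# for the real mavlink-sending code):
log = []
actions = {
    "land": lambda: log.append("landing"),
    "rtl": lambda: log.append("returning"),
    "reset": lambda: log.append("resetting"),
}
```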

Mounting on the Drone

Here’s an overview of the components you need to mount on the drone:

Electron components to mount

Your drone might be different.

For flight tests, I mounted the required components on an F450 quadcopter:

Electron mounted on F450 arm

There are probably better ways to mount the Electron, but Blenderm tape works just fine. In this configuration I mounted the Electron’s Taoglas antenna with the backing tape attached to one of the quad’s ESCs, which means the antenna traces were facing the ground.

From testing I also decided that I didn’t need to use the external battery supplied with the Electron– it appeared that the power supplied by the Pixhawk’s +5V line was sufficient.

Flight Testing

The next step was to try this in the real world.  I took the F450 out to a local flying field:

Electron in field

After powering up, I waited a few minutes for the Electron to connect to the 3G network– however it never obtained a connection.  Thinking that this might be due to the Electron’s antenna facing the ground, I held the quad aloft for about a minute after power-cycling the quad, and sure enough the Electron started “breathing cyan”, indicating it had a 3G connection.

For the first flight, I placed the vehicle into position hold mode, then moved it around a bit as I watched the position indicator on the mavleash app. Sure enough, the tracking was working.

Over the next few flights, I tested sending commands to the vehicle. I flew the vehicle in position hold mode, turned off my RC transmitter (to simulate going out of range during a flyaway), then sent a series of commands to the vehicle using the buttons on the mavleash app:

  • Land.  This worked perfectly.  Within a second of pressing the Land button in the mavleash.html app, the vehicle started an automated landing.
  • RTL.  This did nothing, as my ground tests indicated it might.  The “native PX4” software stack I installed on the Pixhawk doesn’t have a simple RTL handler, and I think that in position hold (POSCTL) mode, it disallows RTL.  Arducopter apparently does support the simpler RTL command, but I haven’t tried it.
  • Reset.  I tried remotely resetting the Electron itself, and this worked as well. Telemetry stopped flowing from the vehicle for about a minute, then started flowing again.

Next Steps

Some folks may wish to run both mavleash as well as a more traditional telemetry radio such as the SiK 915MHz/433MHz, or monitor their mavlink telemetry using MinimOSD.
You can readily split the output line of the autopilot mavlink port and supply it to multiple readers (SiK, MinimOSD, and Electron).  This will provide you with a telemetry stream from the autopilot.  You can configure the wiring so that one of the radios can write commands back to the autopilot’s input pin. For example you could use the MAVBoard Lite to simplify the wiring.
However, if you wish to send commands to the autopilot from multiple sources, this is tricky.  One option would be to use a digital multiplexer (eg NC7SZ157P6X) to switch between, say, the SiK and the Electron.  You could output a mux control signal from one of the Electron’s digital output pins so that when the Electron is sending commands, it owns the autopilot’s input pin.

Precision-guided landing and takeoff of multirotor/VTOL drones

We’ve been working on precision guidance of multirotor/VTOL drones for fully automatic takeoff and landing scenarios.  Here’s a glimpse at a recent simulation of one guidance method.

This is an orthographic projection of the drone-landing platform system viewed from above.  The drone is visible as the black-outline circle with colored dots in the center.  The landing platform is visible as a green colored square with numbered and colored dots at the corners.

The large red and blue rectangles represent a projection angle between the drone and the landing platform.  As the drone descends, the angle shrinks.

In this simulation we randomly position the drone somewhere near the landing platform and then use an algorithm to guide the drone to a precise landing.

The platform in this model is scaled to about 1m sides.
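The geometry of that shrinking angle is easy to write down. As a sketch (not the simulation code), here’s the angle subtended at the drone by a platform edge of a given size:

```python
import math

def platform_angle_deg(edge_m, height_m):
    """Angle (degrees) subtended at a drone directly overhead by a
    platform edge of length edge_m, seen from height height_m."""
    return math.degrees(2 * math.atan((edge_m / 2) / height_m))
```

For the roughly 1 m platform, the edge subtends 90 degrees at 0.5 m altitude but only about 5.7 degrees at 10 m, which is why the projected rectangles shrink as the drone descends toward the pad.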

Running Lua under nuttx on PX4 flight control board

The PX4FMU is a powerful little flight control board based on the STM32F4 chip. The PX4 development environment currently uses the NuttX embedded RTOS.  In order to build and run apps in the PX4 environment, a developer must rebuild the entire NuttX OS, which has obvious drawbacks.

NuttX does include a simple shell, nsh, which supports a basic scripting language.  However, if you wish to develop complex app logic, nsh is probably not the right tool.

One idea I had was to port a high-level interpreted language to nuttx, to allow developers to script and modify flight control behaviors at runtime, without the need to rebuild nuttx.   So I took the compact Lua interpreter code and ported it to the PX4 environment.

You can find this under my PX4Firmware repository on github.  I have a sample app, “pizzahat”, that provides a script to the Lua interpreter.  The current version is a proof-of-concept that causes Lua to run one of the built-in apps.  This demonstrates that Lua can be used to script all the NuttX builtin applications.

Note that I needed to hack Lua itself significantly to get it running under nuttx: you can find the hacked version (lua-nutt) here.

This is very early-stage, but shows potential.

Edit: I’ve had a couple people ask how big the Lua binary is.  A quick test shows that the px4default .px4 file is 115kB larger with lua-nutt (and the pizzahat test module).  

Inspecting Hilo ITO air traffic using ADS-B and dump1090

I’ve been using dump1090 to pick up local Hilo (ITO) air traffic using a cheap USB software-defined radio dongle.  Basically, dump1090 decodes the ADS-B broadcasts that aircraft send out.

Here’s a sample of what Hilo air traffic looks like:

Hex    Flight   Altitude  Speed   Lat       Lon       Track  Messages Seen  . 
--------------------------------------------------------------------------------
ac9ca4 0 0 0.000 0.000 0 140 875 sec
a17371 0 0 0.000 0.000 0 305 192 sec
aafd2b 0 0 0.000 0.000 0 113 2916 sec
a02b4b 4525 0 0.000 0.000 0 734 418 sec
a5f825 13500 0 0.000 0.000 0 723 293 sec
a79130 0 0 0.000 0.000 0 533 1636 sec
a02b4f 0 0 0.000 0.000 0 410 1851 sec
adb2b3 25 0 0.000 0.000 0 557 295 sec
a1824d 0 0 0.000 0.000 0 419 420 sec
aa3f28 -100 0 0.000 0.000 0 317 0 sec
a5e949 1525 0 0.000 0.000 0 25 0 sec
a5df82 0 0 0.000 0.000 0 364 1165 sec
aa341c 0 0 0.000 0.000 0 1 532 sec
ab8752 0 0 0.000 0.000 0 103 4495 sec
a7901c 0 0 0.000 0.000 0 370 6448 sec
7c4836 0 545 0.000 0.000 50 1 21807 sec
a91d39 0 442 0.000 0.000 58 2 22865 sec
ac90b0 17900 0 0.000 0.000 0 3 23607 sec
ab4730 14350 0 0.000 0.000 0 224 23683 sec
a3c847 -125 0 0.000 0.000 0 846 18545 sec
a4b5a7 0 0 0.000 0.000 0 177 310 sec
a17728 0 0 0.000 0.000 0 177 105 sec
a5f46e 0 0 0.000 0.000 0 748 4 sec
a2d1b2 0 0 0.000 0.000 0 201 2760 sec
a61836 717 10750 0 0.000 0.000 0 2523 5299 sec
a61bed 717 10250 0 0.000 0.000 0 1509 8413 sec
a5ff93 11000 0 0.000 0.000 0 686 3908 sec
ac9b28 0 0 0.000 0.000 0 149 8414 sec
a7046f 0 0 0.000 0.000 0 2870 23 sec
a60701 10225 0 0.000 0.000 0 657 10346 sec
a6147f 717 9950 0 0.000 0.000 0 1130 12495 sec

If you look up the ICAO hex identifiers, you’ll see that a good chunk of them are tour helicopters (for example: a17728).
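If you want to post-process this table, here’s a rough Python parser for a single row (a sketch; it assumes the whitespace-delimited layout above, where the Flight column may be empty):

```python
def parse_dump1090_row(line):
    """Parse one row of dump1090's interactive table into a dict.

    Rows with an empty Flight column have 9 whitespace-separated
    tokens (including the trailing 'sec'); rows with a flight
    identifier have 10, so we shift the remaining columns by one.
    """
    t = line.split()
    offset = 1 if len(t) == 10 else 0
    return {
        "hex": t[0],
        "flight": t[1] if offset else "",
        "altitude": int(t[1 + offset]),
        "speed": int(t[2 + offset]),
        "lat": float(t[3 + offset]),
        "lon": float(t[4 + offset]),
        "track": int(t[5 + offset]),
        "messages": int(t[6 + offset]),
        "seen_sec": int(t[7 + offset]),
    }
```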

It appears that dump1090 currently misinterprets some of the ADS-B data that the aircraft are broadcasting.  For example, ICAO hex a6147f is a Boeing 717 operated by Hawaiian Airlines; however, dump1090 lists “717” in the “Flight” field.

We’ll keep playing with this.

PX4FMU and PX4IO setup with APM

Ok, I was trying to set up my PX4IO board with the latest firmware.  This took way longer than it should have, possibly because the instructions spread across the various pixhawk/diydrones sites are outdated and rambling. Here are the steps that worked for me.

  1. Disconnect the PX4IO from the PX4FMU, if it’s already connected. I was never able to get this to work smoothly with the two already attached. 
  2. Flash the standalone PX4FMU with the latest APM firmware. I used the qupgrade tool with the PX4FMU board connected via USB.
  3. Place the latest “px4io.bin” file onto a microSD card, in the root directory.
  4. Insert the microSD card into the PX4FMU. Remove the PX4FMU from any source of power (e.g. USB)
  5. Attach the safety switch to the PX4IO.
  6. While holding down the safety switch, attach external power to the PX4IO. In my case I used an old NiMH battery pack attached to the power input directly.  
  7. The PX4IO should power up into bootloader mode. The red/amber LED will be pulsing rapidly, the green light will be solid, and the blue light will be off, as detailed here. If the LEDs aren’t lit properly to indicate bootloader mode, remove power and try again.
  8. Release the safety switch.
  9. With the PX4IO board still powered, insert the expansion bus pins from the PX4IO board into the PX4FMU.  The good news here is that you don’t need to hold down the safety switch anymore, once you get the PX4IO into bootloader mode. 
  10. Since you’ve already flashed the PX4FMU with the APM firmware, the APM startup script will notice the px4io.bin file on the microSD card and install that on the PX4IO. 
  11. To verify that the PX4IO updated properly, you can look for /APM/px4io_update.log on the microSD card. It should say something like “Loaded /fs/microsd/px4io.bin OK” at the end.
  12. Note that you may need to reboot after this.  You can either use the reset button on the PX4FMU board, or use eg UART1 to issue the “reboot” command in nsh.
That’s it…good luck!

Connecting to MAV using 3DR telemetry radio from iOS device

Today I tried a simple test with the Redpark iOS serial cable for iPhone, trying to connect from a simple iOS test app through the cable to a  3DR telemetry radio in order to send mavlink commands to a receiving radio mounted on a drone.

Unfortunately it looks like the Redpark cable only puts out a 3.3V logic-level signal.  This doesn’t work with the 3DR radio, which apparently needs 5V signals.  Sure enough, plugging the radio into the cable results in some flaky behavior.

My next step will be to try a logic-level shifter between the 3DR radio and the Redpark cable.  I may also introduce a 5V power supply to power the radio separately from the iPhone supply.