A recent technical challenge required me to dig deeply into how UDP ports are "bound" - that is, reserved or allocated - in the Linux TCP/IP implementation. It ended up being one of those cases where I had an intuition about how things worked, then found evidence suggesting my intuition was wrong, and in the end discovered it was correct after all, though in a different way than I'd expected. Along the way, I wrote and nearly published a different post that would have perpetuated some misperceptions about UDP port binding in Linux. I am writing this post instead, in an attempt to promulgate correct information.
If you're in a hurry to get to the punchline, it's this: you CAN bind more than one UDP socket to the same address and port in Linux; skip to the section titled Testing the solution to see how.
Background
In my last post, I mentioned working on a secondary controller, or "payload," for small autonomous aircraft. Our research team uses ROS to connect a variety of interrelated software components that run on this payload. The component I discussed before is the "bridge" between ROS and the aircraft autopilot; another component is the bridge between ROS and the wireless network that connects that aircraft to all other flying aircraft as well as to ground control stations.
We currently use 802.11n wireless devices in ad hoc mode, and optionally a mesh routing protocol (such as B.A.T.M.A.N. Advanced) that enables aircraft to act as relays, repeating messages on behalf of other aircraft that are out of range of direct transmission. Our command and status-reporting protocol is built on top of UDP, and we use either IP unicast or IP broadcast depending on the type of message being sent. Command messages from ground control stations to aircraft may be either unicast or broadcast; reports from each aircraft are always broadcast because other aircraft need to know its position for formation flight and collision avoidance.
To test software before we fly it, we run one or more Simulation-In-The-Loop (SITL) instances on one or more computers; each SITL instance includes the autopilot software, a simulated flight dynamics model, and the payload software. Because each instance of the payload software needs to communicate via UDP unicast and broadcast - both with other SITLs on the same computer and with SITLs on other computers - we need a way to open multiple UDP sockets on the same port at the same time, each able to send and receive broadcasts to and from the others. Whether this is supported turns out to be a matter of great confusion.
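As a preview of where this ends up, here is a minimal sketch of one way to open such a socket on Linux; the details (and the testing) come later in the post. Note that SO_REUSEPORT requires Linux kernel 3.9 or newer, and every socket sharing the port must set it before binding.

```python
import socket

def make_shared_broadcast_socket(port):
    # Sketch only: allow several sockets to bind the same address/port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # SO_REUSEPORT (Linux 3.9+) explicitly permits multiple sockets
    # to bind the same address and port; not defined on all platforms.
    if hasattr(socket, 'SO_REUSEPORT'):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(('', port))
    return sock
```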
The 60-second-or-so guide to UDP broadcasts (in Python)
As I noted in my previous post, most of the team develops in Python. Sending and receiving UDP broadcasts in Python is quite easy; setting up a socket and sending a datagram to a broadcast IP address and an arbitrary UDP port takes all of four lines of code (not counting error handling).
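Something like the following, where the broadcast address and port are illustrative stand-ins for your network's values:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# 192.168.1.255 and 5000 are placeholders for a real broadcast
# address and an agreed-upon port
sock.sendto('hello, fleet', ('192.168.1.255', 5000))
```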
Wednesday, July 9, 2014
A Python ROS bridge to MAVLink-based autopilots
My main project over the past few months has been developing a secondary controller (some call this a "companion computer"; we usually call it the "payload") for a small, fixed-wing autonomous aircraft flown by a commercial off-the-shelf hobbyist autopilot that speaks the MAVLink protocol.
The purpose of the payload is to provide higher-level mission and path planning, inter-aircraft coordination, and an interface to ground control stations; this allows the autopilot to remain dedicated to immediate guidance, navigation, and control tasks and to maintaining flight safety in general. As a research group, our aim is to develop the capability for up to fifty of these aircraft to operate cooperatively and to execute a complex mission with only high-level commands from the ground.
We chose to build the payload software on top of the Robot Operating System (ROS). ROS offers two important features: first, inter-process communication middleware that includes both a publish-subscribe model and a service request-response model; second, a vast open-source library of software components, or "nodes," ranging from extended Kalman filters to laser rangefinder drivers to coordinate frame transforms. Using ROS, we can create (and leverage) a collection of nodes onboard the payload to communicate across a wireless network with other aircraft and the ground, perform path planning, process sensor data, and so on.
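To make the publish-subscribe model concrete, here is a minimal rospy sketch; the node and topic names are illustrative, not taken from our actual payload code:

```python
import rospy
from std_msgs.msg import String

# Minimal publisher: announce this aircraft's status at 1 Hz.
rospy.init_node('status_reporter')
pub = rospy.Publisher('payload/status', String, queue_size=10)
rate = rospy.Rate(1)
while not rospy.is_shutdown():
    pub.publish(String(data='payload alive'))
    rate.sleep()
```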
One of these nodes, necessarily, must be a driver (or "bridge") between the autopilot and the other ROS nodes. In our case, it must map MAVLink messages from the autopilot into ROS messages, and map commands and queries from ROS subscribers and services into MAVLink messages to the autopilot. Of course, this mapping isn't one-to-one in most cases. MAVLink specifies transaction protocols for retrieving and updating things like the list of waypoints; a single ROS service can request the list of waypoints, wait while the transaction takes place, and return a single response with the entire list. Other operations can (and should) be one-to-one and even idempotent: changing the guidance mode of the autopilot, or (dis)arming the throttle.
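For example, a waypoint-retrieval service handler can wrap the entire MAVLink transaction in one blocking call. Here is a rough sketch using pymavlink's mavutil, not the actual bridge code; timeouts and retries are simplified:

```python
from pymavlink import mavutil

def fetch_waypoints(master, timeout=5):
    # master is a pymavlink connection, e.g. from
    # mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
    master.waypoint_request_list_send()
    count = master.recv_match(type=['MISSION_COUNT', 'WAYPOINT_COUNT'],
                              blocking=True, timeout=timeout)
    if count is None:
        return None  # autopilot never answered
    waypoints = []
    for seq in range(count.count):
        master.waypoint_request_send(seq)
        wp = master.recv_match(type=['MISSION_ITEM', 'WAYPOINT'],
                               blocking=True, timeout=timeout)
        if wp is None:
            return None  # transaction stalled mid-list
        waypoints.append(wp)
    # (a fully compliant client would send a MISSION_ACK here)
    return waypoints
```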
As it turns out, there are a handful of bridges between MAVLink and ROS out there already, including:
- roscopter
- mavros
Of these, mavros is the most full-featured. It offers a healthy set of publishers conveying various aspects of the autopilot state, and subscribers to handle commands to the autopilot. It also offers a modular plugin architecture so that other developers can add on new ROS publishers, subscribers, and services. It is written in C++ (ROS officially supports both C++ and Python, and Matlab support has recently been introduced), which can be a boon for efficiency and a bane for complexity. We ideally would like the flexibility of its modular architecture, but our team has greater access to Python programmers than to C++ programmers. Further, we anticipated some unique interfacing needs that might merit a custom approach.
Thus, I took the unreasonable course of action and decided to roll my own autopilot bridge, creatively named autopilot_bridge. It is loosely inspired by roscopter but adopts its own modular architecture in the spirit of the popular MAVLink-based ground control station software MAVProxy.
You can find autopilot_bridge, and more documentation on installing and running it, on GitHub:
https://github.com/mikeclement/autopilot_bridge
Someday, when some stray free time turns up, I would love to share some of the lessons I learned about dynamically loading Python code and interfacing with protocol state machines. In the meantime, it's all in the code :)
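As a taste of the dynamic-loading piece, the essence is something like the sketch below, using Python 2's imp module; the real code in the repository handles registration and error cases:

```python
import imp
import os

def load_plugin_modules(directory):
    # Import every .py file in 'directory' as a plugin module,
    # keyed by file name; illustrative only.
    modules = {}
    for fname in sorted(os.listdir(directory)):
        if not fname.endswith('.py') or fname.startswith('_'):
            continue
        name = fname[:-3]
        modules[name] = imp.load_source(name, os.path.join(directory, fname))
    return modules
```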
Saturday, February 22, 2014
Multiple AR.Drones from a single computer using the ardrone_autonomy ROS package
I'm currently working with a small graduate student team on centrally controlling a group of Parrot AR.Drone quadcopters or "quads" using the Robot Operating System (ROS), a software middleware for connecting robotic system components. There is a ROS driver called ardrone_autonomy for the AR.Drone, which wraps the AR.Drone SDK and exposes its controls and feedback to ROS' publish-subscribe model. This works well for controlling a single quad from a single computer. However, we ran into issues when trying to control multiple quads from a single computer, seemingly because the underlying SDK was designed for single-computer to single-quad use.
A lot of other groups have posted interesting solutions for controlling multiple quads. The AR.Drones use 802.11 wireless networks for control, feedback, and video; some solutions we examined focus on reconfiguring all quads to use a common network and then use multiple computers to control them. Others go a step further and allow control of multiple quads from a single computer. However, no solution that we've encountered allows both control and feedback/video of multiple quads from a single computer (at least, not using stock AR.Drone firmware).
One of the students, Brenton Campbell, and I set out to ~~fly, crash, and break multiple quads at once~~ hack some code and develop a solution for full bi-directional communications. This post documents and provides all code necessary for multi-quad control, feedback (navdata), and video from a single computer. It uses a modified version of the ardrone_autonomy ROS package and its included version of the AR.Drone SDK (v2.0.1, internally termed ARDroneLib), combined with some network hackery. Further, our solution requires no manual or permanent changes to the quads; it is entirely coordinated from the computer and can "automatically" reconfigure and utilize out-of-the-box quads. It has only been tested on v1.0 quads but, as far as I've read, should apply equally to v2.0 quads.
I should note that this solution draws on insights gleaned from the above-referenced posts, and also:
The solution described herein might void warranties, violate license agreements, and (though exceedingly unlikely) render your AR.Drone unusable. My description assumes a fair amount of familiarity with Linux, comfort using the command line, and a bit of C and Python programming skill. Use this information and code at your own risk!
The solution can be broken into three steps, each of which I discuss below:
- Reconfiguring network settings on individual AR.Drones
- Remapping UDP ports to unique ground-side port numbers
- Modifying the AR.Drone SDK and ardrone_autonomy package to customize UDP ports
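As a flavor of the first step: stock AR.Drones run an open telnet service on their access-point address, so reconfiguration can be scripted entirely from the ground side. A rough sketch follows; the exact wireless-interface commands vary by firmware and are not shown here:

```python
import telnetlib

DRONE_IP = '192.168.1.1'  # stock AR.Drone access-point address

def run_on_drone(command):
    # Stock firmware exposes an unauthenticated root shell on port 23.
    # Illustrative only; a real script should read back output and
    # expect the connection to drop when the network reconfigures.
    tn = telnetlib.Telnet(DRONE_IP, 23, timeout=5)
    tn.write(command + '\n')
    tn.close()
```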
Update 3/26/2014 - Following a discussion thread on the ardrone_autonomy GitHub project, another contributor, Kenneth Bogert, discovered an alternative to steps 2 and 3 above. It turns out that when the computer sends UDP probe datagrams to a quad's video and navdata ports, the quad captures the source ports and uses them as its destination ports. By modifying the SDK to use ephemeral client-side ports, both video and navdata work and no port remapping is necessary. Kenneth posted a pull request with his modifications here.
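Put differently, a client can bind an ephemeral port, send its wake-up probe from that port, and receive the stream back on the same socket. A minimal sketch of the idea for navdata; the four-byte payload is the customary wake-up datagram:

```python
import socket

DRONE_IP = '192.168.1.1'
NAVDATA_PORT = 5554

# Bind an ephemeral local port (0 lets the OS choose); the drone
# replies to whatever source port the probe datagram came from.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 0))
sock.sendto('\x01\x00\x00\x00', (DRONE_IP, NAVDATA_PORT))
data, addr = sock.recvfrom(65535)
```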
Update 11/20/2014 - An updated version of Kenneth's pull request was merged into ardrone_autonomy on the 9th.