Category: Coding

RoboJuggy at JavaOne

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and David's friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results.

Bruno is an avid puppeteer, a global organizer of Java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden's RSMedia and various versions of David's hobby-servo-based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture, we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered, and the new Raspberry Pi B+ seemed like the perfect low-power brain to make that happen. I like the Debian-based Raspbian distributions but had lately started using the "netinst" Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don't need. I'd recommend that anyone interested in duplicating our work start their journey there:

Raspbian UA Net Installer

Robots seem like an embedded application, but ROS only ships packages for Ubuntu. I was pleasantly surprised to find that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions:

Setting up ROS Hydro on the Raspberry Pi

Building from source means that your whole install ends up being "isolated" (in ROS speak), and your file locations and build instructions end up being subtly different from the standard ones. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way if you make a mess of things in later steps you can restore your install to a pristine Raspbian+ROS install. If your SD card was at /dev/sdb you might use something like this to do the job:

sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz
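If you do need to restore later, the same pipe runs in reverse. A sketch, assuming the same device path; substitute the actual image file name generated above and double-check the device before letting dd loose on it:

gunzip -c /home/your_username/image<date>.gz | sudo dd bs=4M of=/dev/sdb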

Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi has an ARM CPU, so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn't available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM.

Java 8 JDK for ARM
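The install itself is just unpacking the tarball and putting the java binary on your PATH. A rough sketch; the exact tarball and directory names depend on the build you download, so adjust accordingly:

> sudo mkdir -p /opt/java
> sudo tar -xzf jdk-8-linux-arm-*.tar.gz -C /opt/java
> export PATH=/opt/java/jdk1.8.0/bin:$PATH   # adjust to whatever directory the tarball unpacks to
> java -version                              # confirm the ARM JDK is the one being picked up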

At this point you are ready to start installing the ROS Java packages. I'm pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for Raspberry Pi. I started by following these directions for ROS Java, with a few exceptions (you have to click the "install from source" link on the page to see the right stuff):

Installing ROS Java on Hydro

Now these instructions are good, but this is a Pi running Debian and not an Ubuntu install. You won't run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently "chain" workspaces in ROS, but I didn't understand this well enough to get it working, so here is what I did instead:

> mkdir -p ~/rosjava 
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
> # Make sure we've got all rosdeps and msg packages.
> rosdep update 
> rosdep install --from-paths src -i -y

and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src so that I could run one standard build over everything.
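If you're duplicating this step, the copy is just something along these lines (a sketch assuming the default workspace paths from the steps above):

> cp -r ~/rosjava/src/* ~/ros_catkin_ws/src/

and then, from inside ~/ros_catkin_ws, the build itself: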

> catkin_make_isolated --install

Like the main ROS install, this process will take a little while. The Java Gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, Docker, whatever) and do these same "build from source" installs there. This will let you try your steps out on a much faster system before you try them on the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally shown up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch out for that mistake. Here is a picture of the RSMedia when it first came out of the box:

wpid-20140911_142904.jpg

Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs, and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan-and-tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan-and-tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of eBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy-to-use, high-level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board, but that would be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won't regret buying it.
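To give a sense of how simple the serial protocol is, here is a sketch of centering one servo using Pololu's compact "set target" command (0x84, channel number, then the target in quarter-microseconds split into two 7-bit bytes). The device path is just a guess for a USB-attached Maestro; over the TTL serial pins you would write to the Pi's UART device at whatever baud rate the board is configured for:

> stty -F /dev/ttyACM0 raw
> # 1500 us = 6000 quarter-us: low 7 bits = 0x70, high bits = 0x2E
> printf '\x84\x00\x70\x2e' > /dev/ttyACM0   # set channel 0 to 1500 us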

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since its heyday five years ago, but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge-based website at http://rsmediadevkit.sourceforge.net.

That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you go because they don't take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note is that if all you want to do is solder onto the control board and not replace the head, then it's feasible to solder to the board in place without completely disassembling the unit. Here are some photos of the disassembly:

wpid-20140921_114742.jpg wpid-20140921_113054.jpg wpid-20140921_112619.jpg

Now I also had to start adjusting the puppet head, building an armature for the motors to control it and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have learned about cardboard is that if you get something going with it and you need it to be a little more production strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough because the resin soaks through the fibers of the cardboard and hardens around them. You will want to do this in a well-ventilated area, but it's a great way to build super-tough prototypes.

Another prototyping trick I can suggest is using a combination of Velcro and zip ties to hook things together. The result is surprisingly strong and still easy to take apart if things aren't working out. Velcro self-adhesive pads stick to rubber like magic, and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked in the garage. This may sound gross but I'm telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.

I was also connecting the pan and tilt armature onto the puppet’s jaw and eye assembly. It took me most of the evening to get all this going but by about one in the morning things were starting to look good!

I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not "built in". I tried to follow the provided instructions but wasn't able to get that working then, and still haven't.

Using “unofficial” messages with ROS Java

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there is a set of pins that provides a serial interface to the ARM-based embedded Linux computer that controls the robot. To do that I followed these excellent instructions:

Connecting to the RSMedia Linux Console Port

After some sweaty time bent over a magnifying glass I had success:

wpid-20140921_143327.jpg

I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge), Jim Tanner.

It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn't get it to respond to commands, but I quickly realized I had flow control turned on. Once that was turned off I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource, which let me get all kinds of body movements going:

A collection of useful commands for the RSMedia
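For anyone retracing these steps, the minicom invocation itself is just something like this (a sketch: the device path depends on how your USB-TTL adapter enumerates, and hardware flow control gets switched off under minicom's serial port setup menu):

> minicom -b 115200 -D /dev/ttyUSB0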

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry's latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going. Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work.

The last day was spent doing things that I wouldn't be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature. He used a mix of oil paint and rubber cement, which stuck to the mask beautifully.

I bought battery packs for the USB Pi power and the 6V motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything.

Branching Bad

My friend James Spargo recently sent me this article on feature branches by Martin Fowler, in which he criticizes the "feature branching" approach to software development. In essence, feature branching endorses an approach where changes to a computer program are made in an isolated container (the "branch") until they reach a certain level of readiness. Mr. Fowler believes, instead, that these changes should be folded in quickly so that there is communication among the other participants in the project.

Mr. Fowler is a smart man, but at Brainfood we feel like our lives became less complicated and more productive with the adoption of feature branching. Our process up until then had classically been all of us pushing into a single branch and rarely keeping many branches around. The excellent and experienced staff at Eucalyptus Systems dragged us into using feature branches and we've stuck with it. Having experienced both approaches, I'll briefly outline what I see as the benefits of using feature branches.

Life puts you through big changes

Though Mr. Fowler prefers to steer clear of branches, he does advocate the sensible policy that each commit should be "ready for production". It isn't clear to me how this is achievable for non-trivial refactorings. There are plenty of occasions where the changes a piece of code needs are small and can be written quickly and committed, but certainly not all situations are like this. The treachery of multi-core programming can easily put you in a position where you need to rethink the way a whole group of components interact, and those components may have mature and widely deployed interfaces. At the same time, the show must go on, and you may have work that needs to happen immediately while a large, complicated refactoring is in process. In a situation like this it's hard to see how a "one branch to rule them all" approach is going to get the job done.

Protecting yourself from yourself

Another corollary of the single "always ready" branch model is that change streams probably end up staying on your desktop until they are ready. Mr. Fowler even advocates that "every commit should be ready for production". To me this adds up to people keeping substantial chunks of work in the working directory of their desktop until they have everything working. This is a recipe for either losing a bunch of work or screwing up a bunch of useful work because you took a wrong turn halfway through. Computers break, and people sometimes write bad code after working a long time. With feature branching you can push and push your changes into your own isolated branch without bothering anyone else in the project. If you take a wrong turn, you will also have a series of snapshots so that you can roll the tape back to the point just before you went astray.
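A rough sketch of that day-to-day rhythm with git (the branch name and commit reference here are just placeholders):

> git checkout -b feature/juggy-head            # carve out an isolated branch
> git commit -am "WIP: wire up the jaw servo"   # snapshot as often as you like
> git push -u origin feature/juggy-head         # get it off your desktop and into the team repo
> git log --oneline                             # find the point where things went wrong
> git reset --hard <good-commit>                # and roll the tape back to it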

If you work on multiple computers (home, work, laptop, etc.), pushing to branches in the team repo is also a convenient way to hand your work between environments. You can definitely get there with a peer-to-peer approach, but sometimes dealing with a single well-connected host is easier than the VPNs and SSH proxies that may be required to get all your machines talking to each other. Sometimes, because of policy, it may not be possible at all.

Team communication

Another beneficial side effect of feature branching is that your work in progress is encoded as a convenient artifact that can be passed around to other members of the team for review. In Mr. Fowler’s approach, the team finds out about your changes because they are literally stumbling across them. A formal review process doesn’t seem possible because the commits are frequent, fragmentary and continuous.

With feature branches it is a common practice for one or multiple members of the team to review your branch before it is committed. Since the changes occur in a branch there is the convenience that the work will be cleaned up and thought through before someone else has to grok it. There is more of a chance that the changes could be ill-conceived and have a lot of work in them since they occur in isolation. It seems like the cure to that problem, however, is to have other team members take a peek at your branch before you get too far along.

At some level, each time we begin typing we are potentially taking the whole team on some flight of fancy springing from deep within our minds. With feature branching, the rest of the team joins that jaunt with a greater degree of complicity. When the changes are rolling into the master branch all the time, you can complain, but you will need to be very vigilant or you could find yourself along for the ride.

Frankencode

My final criticism of the CI all-in-one-branch approach is that people inevitably make mistakes. I'm not talking about typos but about big conceptual miscues. In the "along for the ride" model you may find that your stream of useful changes is interleaved with some tragically misguided work, and you failed to notice it because you were so focused on the task at hand. The other team member may even realize it and be trying to unroll the changes. Unfortunately, their work is now tangled into yours, and you may even depend on parts of their stuff that worked well enough for you to remain oblivious to the problems they discovered.

Certainly there is a danger of people sitting on their work with the feature branch approach. You do not want someone to carve off their own little playground and go on wild sprees of modification while the rest of the project leaves them steadily behind. If you are pushing branches back to a central point, however, and tracking progress with a ticket system (we use Redmine Backlogs), then you have great management tools to fight these problems. It seems to me that with the single branch model you have intrinsic problems that can't be fought through management tools. For us, at least, that has made feature branching seem the better model.

Twitter Flight vs BackboneJS

This blog post started out as a private email but I’ve seen a few other articles tackling the same topic. I thought I might fish some good feedback out of the ether if I made it more public.

I’m involved with an effort to improve the JavaScript UI for the Eucalyptus Cloud Management product. Lately we have been examining whether Twitter Flight or BackboneJS represents a better way forward. Flight provides very good infrastructure for managing events. Perhaps somewhat questionably, they extend this excellence into an argument that there is no need to manage the model or the flow of data back and forth to the server. From the Flight site:

While some web frameworks encourage developers to arrange their code around a prescribed model layer, Flight is organized around the existing DOM model with functionality mapped directly to DOM nodes.

Not only does this obviate the need for additional data structures that will inevitably influence the broader architecture, but by mapping our functionality directly onto the native web we get to take advantage of native features.

I don't know whether they are trying to say that the need for models is "obviated" entirely or merely that it can be decoupled from the view event management framework. Even if they believe it's the former, their own code speaks to it being the latter. When you examine the Flight demo mail app you find an anemic sort of model infrastructure managing "contacts" and "mail" with plain-Jane collections of objects:

https://github.com/twitter/flight/blob/gh-pages/demo/app/data.js

You can then see those collections being used to populate views:

https://github.com/twitter/flight/blob/gh-pages/demo/app/component_data/mail_items.js

This is really very similar to Backbone, except with far less power and utility. In Backbone, there are tools for paging large data sets, synchronizing collections with the server, transparently storing data and a lot more. If you aren't familiar, glance over the capabilities it provides here:

http://backbonejs.org/#Model

If you wanted to extend Twitter’s demo mail app into something full-featured like Zimbra or RoundCube then it doesn’t take much imagination to see yourself, piece by piece, re-implementing the tools that Backbone already provides.

There is no doubt that Flight is innovating in the view space. They also seem to have added some real value managing the UI view lifecycle and tearing down resources. I must admit that I am attracted to the picture they are painting. At the same time, I am sure that a UI event infrastructure does not an application make. Any serious effort is going to find itself in need of a robust infrastructure for managing the movement of data between the client and server. I would love to see a fusion of Flight’s events with Backbone’s models. This kind of decoupling and “small sharp tools” is the stuff that developer dreams are made of.