Author: Simon Birrell

How to structure a Python-based ROS package

My apologies for the long wait since the last post. I started a PhD in AI and Robotics at Cambridge University, which has been absorbing all my spare time. This post is actually quite old, but I never got around to publishing it. The good news is that I’ve learned a ton about robotics, biomimetics, ROS, MoveIt, deep learning, the Jetson TX1 and more, all of which I hope to share in due course. My apologies also for comments that waited months to be approved. When I finally dragged myself back to the comments section, I found only 4 real comments among more than 600 bits of spam.

This rather specific post grew out of a frustrating attempt to write a fairly complex ROS package in Python, which was continually stymied by seemingly random import errors whenever my node was launched in different ways. In fairness, I am neither an experienced Python programmer nor a ROS expert, so my problems were due to ignorance. However, clearing up that ignorance was non-trivial and led me down one rabbit hole into how Python manages packages and another into how ROS does it.

This post is intended to be a shortcut for novice ROS developers who want to develop a Python node that involves more than one source file and imports Python libraries.
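As a taste of where those rabbit holes lead: catkin, the ROS build system, expects a Python package to follow a particular layout so that imports work the same whether the node is run directly, via `rosrun`, or via `roslaunch`. The sketch below is illustrative only; the package name `my_package` and file names are hypothetical, not from the post.

```shell
# A typical catkin-friendly layout for a Python ROS package
# (package name "my_package" is a made-up example):
#
# my_package/
# ├── CMakeLists.txt       # must call catkin_python_setup()
# ├── package.xml
# ├── setup.py             # uses catkin_pkg.python_setup.generate_distutils_setup
# ├── scripts/
# │   └── my_node          # thin executable that imports from src/my_package
# └── src/
#     └── my_package/
#         ├── __init__.py
#         └── core.py      # the actual logic, importable as my_package.core
```

The key idea is that the executable in `scripts/` stays thin and all importable code lives under `src/`, declared in `setup.py`, so Python's package machinery and ROS's launch machinery agree on where things are.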

Continue Reading

Adding an SD Card to your Deep Learning Robot

The Deep Learning Robot comes with 16 GB of built-in flash on the Jetson TK1 board. That’s fine to begin with, but after downloading a few Caffe models, you’ll be out of space. Fortunately, the TK1 comes with an SD card slot for adding extra storage. This post describes how to add and configure an SD card to give yourself more room.
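The full walkthrough is behind the link, but the general shape of the job is the usual Linux one: identify the card, format it, mount it, and make the mount permanent. A rough sketch follows; the device name `/dev/mmcblk1p1` and mount point `/mnt/sdcard` are assumptions, so check `lsblk` on your own board before formatting anything.

```shell
# Identify the SD card device first -- the name below is an assumption!
lsblk

# Format the card as ext4 (this destroys any existing data on it)
sudo mkfs.ext4 /dev/mmcblk1p1

# Create a mount point and mount the card
sudo mkdir -p /mnt/sdcard
sudo mount /dev/mmcblk1p1 /mnt/sdcard

# To mount it automatically at boot, add a line like this to /etc/fstab:
# /dev/mmcblk1p1  /mnt/sdcard  ext4  defaults  0  2
```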

SD Card on the Deep Learning Robot
Continue Reading

Docking and Recharging the Deep Learning Robot

The Kobuki charger costs an extra $49 when you buy the Deep Learning Robot and is well worth throwing into the package. With a few simple commands you can get your robot to dock and recharge itself, providing it is in the general vicinity of the charging station. The following is adapted from the Kobuki / ROS tutorials.
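For reference, the "few simple commands" in the Kobuki / ROS tutorials boil down to bringing up the base and then launching the auto-docking behaviour. The package and launch-file names below are from the `kobuki_auto_docking` package as I remember it; treat them as a sketch and check the tutorial for your ROS version.

```shell
# Terminal 1: bring up the Kobuki base with the auto-docking nodelet
roslaunch kobuki_auto_docking minimal.launch --screen

# Terminal 2: trigger the docking action (robot must be near the station
# with the dock's IR emitters in line of sight)
roslaunch kobuki_auto_docking activate.launch --screen
```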

Here’s a video of docking taking place in my crowded living room. Please forgive the baby noises:

Continue Reading

A Deep Learning Robot that recognises objects

Saturday night at home. The Deep Learning Robot “Dalí” now trundles round the house identifying objects and saying their names, with variable success. In the video you can hear it correctly identify “cradle”, “studio couch”, “lampshade” and “home theatre” (it’s an American neural network). However, there’s a surreal moment when it sees a non-existent “grand piano”.

The object recognition is done with Caffe as described previously, with a few new ROS nodes to do the speech. More details as soon as I have a proper fix for the Bluetooth speaker pairing issue.

The Art of Seeing: Integrating ROS, Caffe, OpenCV on the Autonomous Deep Learning Robot

recognising a washing machine

The frustrating thing about robotics is the amount of time you have to spend on problems that aren’t really to do with robotics at all. The Autonomous Deep Learning Robot comes with OpenCV4Tegra, a specially accelerated version of the OpenCV vision library. It comes with Caffe, a neural network tool that allows you to do pretty good object recognition. It also comes with Robot Operating System (ROS).

So how hard should it be to make these work together and get your robot to recognise the objects that it sees?

HARD!

Continue Reading