Frequently Asked Questions (FAQ)#
Which Operating System should I use?#
- For desktops and laptops, we recommend a GNU/Linux (what is Linux? (Spanish)) distribution (what is a distribution? (Spanish)) on a native partition (what options do I have to install a distribution? (Spanish) and how to install a native partition (Spanish)). The specific GNU/Linux distribution we recommend is Ubuntu (Ubuntu Desktop). You are free to choose between version 16.04 LTS and 18.04 LTS. Either one is a good option, although we have slightly more support for 16.04.
- For robot on-board CPUs, you can read through a long conversation at: questions-and-answers#20.
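- To check which Ubuntu release an existing machine is running, a quick terminal command (standard Ubuntu tooling, nothing specific to our software):
lsb_release -a # prints the distributor and release, e.g. 16.04 or 18.04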
I was told to install something. How can I do that?#
- Please make sure you carefully read and understood the dedicated section at: how do I install programs on Linux? (Spanish)
- Each of our repositories usually contains installation instructions, e.g. the initial README.md of https://github.com/roboticslab-uc3m/vision links to its doc/vision-install.md documentation file.
- Note 1: Don't know what a repository is? Please read: Version control (Spanish)
- Note 2: This manual contains an index of our repositories: HERE
- For instructions on installing 3rd party software, please see a special repository we maintain: https://robots.uc3m.es/installation-guides/
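- For example, to fetch the vision repository mentioned above and read its install notes locally (plain Git; any of our repositories works the same way):
git clone https://github.com/roboticslab-uc3m/vision
less vision/doc/vision-install.md # or open the file on GitHub in your browser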
I see a lot of commands for installation but do not understand anything. What do they mean?#
- Please read: Linux - Bash (Spanish)
I've heard lots of stuff about Git and GitHub. What do they mean?#
How should I program stuff?#
- Please make sure you carefully read and understood the dedicated section at: Best Practices: Programming
How should I document stuff?#
- Please make sure you carefully read and understood the dedicated section at: Best Practices: Documenting
How can I record data for Programming by Demonstration (PbD) a.k.a. Learning from Demonstration (LfD)?#
First, if moving the robot by hand, you'll want some gravity compensation to help out. That's the [gcmp] command of BasicCartesianControl. See the BasicCartesianControl documentation for details.
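As a sketch, and assuming the cartesian controller exposes its RPC port at /teo/leftArm/CartesianControl/rpc:s (a placeholder name, check what your launch configuration actually opens), you could enable it from a terminal:
yarp rpc /teo/leftArm/CartesianControl/rpc:s # then type: gcmp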
Once you have publishing services running (robot joint/cartesian state, sensors output), there are two options for recording data:
- Manually grab individual "waypoints" or sensor data and store them in file(s). For instance, in the joint space, do some yarp rpc /robotName/manipulatorName/rpc:i to get joint positions.
- To record full trajectories (the data stream of a certain YARP port) at a given sample rate, use yarpdatadumper. To record from several YARP ports, yarpdatadumperAppGenerator can be used to generate a yarpmanager app of yarpdatadumper components.
An example of recording a left arm trajectory of TEO:
- Terminal 1:
launchManipulation # Part of teoBase
- Terminal 2:
yarpdatadumper --name /leftArm # the data.log and info.log files will be saved in a new `leftArm` directory
- Terminal 3:
yarp connect /teo/leftArm/state:o /leftArm
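When the demonstration is over, you can stop cleanly (plain YARP usage, nothing specific to our setup):
- Terminal 4:
yarp disconnect /teo/leftArm/state:o /leftArm # stop feeding the dumper
- Terminal 2: stop yarpdatadumper with Ctrl+C; the recorded files remain in the `leftArm` directory.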
How can I play back data recorded for Programming by Demonstration (PbD) a.k.a. Learning from Demonstration (LfD)?#
Depending on options above:
- You can use the waypoints in a program as in this example.
- Use stuff from our tools repository. Specifically, you'll want the PlaybackThread. You can find an example of use at examplePlaybackThread and its corresponding test.
Note: There are several alternatives to these approaches, but these are kind of nice. yarpmanager has some record/playback facilities, but we haven't really tried them. Additionally, yarpdataplayer is the packaged YARP utility for playback. However, these interfaces have their playback capabilities tightly coupled to their GUI code. The previously mentioned components from the tools repository are lightweight and can be used independently as they are not coupled with any graphical interface.
How does the iPOS PT Mode work?#
PT Mode executes at a fixed rate at driver level. This is great, because it runs in real time right next to the motor, so network latencies will not affect the execution of a set of pre-defined joint-space targets (positions). We are not justifying how it's implemented here, just explaining why the manufacturer did it the way they did. Naïve options would be:
- First receive (e.g. via CAN-bus) all the trajectory, then execute each target at the exact time given the fixed period. The issue with this is: how much memory should we reserve for this? What happens if somebody wants to run a trajectory with thousands or millions of intermediate targets?
- Receive the next target (e.g. via CAN-bus), execute it at exactly the planned time given the fixed period, repeat. The issue with this is: what happens if a target arrives late?
Neither of these options is the implemented solution. The iPOS implementation is an intermediate approach: essentially a FIFO memory with 8 buffer positions (one would have to check the iPOS manual for the exact value). So, you start filling it in; once it is initially full you start running, and then you continue feeding it targets (e.g. via CAN-bus) at the rate established by the fixed period.
- If you feed it too slowly, the buffer will empty before time and the movement will stop.
- If you feed it too fast, the buffer will get full (you'll see a pt buffer full! message in our TechnosoftIpos implementation).
Hence, it is best to feed it at the most precise rate possible. Take into account that a PeriodicThread (formerly RateThread in YARP) will be more precise than adding a fixed delay at the end of your loop. You may ask yourself whether there is a minimum threshold. The answer is yes, and this minimum should be estimated from the time consumed by CAN-bus communications to feed all the individual drivers per period.
How can I change the RGB-D sensor resolution?#
We use the YARP OpenNI2DeviceServer device for this. In teoBase.xml#L36 you can see an example instance:
yarpdev --device OpenNI2DeviceServer --depthVideoMode 4 --colorVideoMode 9 --noRGBMirror
If you want to know which values you can use for --depthVideoMode and --colorVideoMode instead (and the actual meaning of the current values), please launch:
yarpdev --device OpenNI2DeviceServer --printVideoModes
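To double-check what the device is actually streaming, you can preview it with yarpview. The output port name below is just an illustration (it depends on the device defaults and any name prefix), so verify it first:
yarp name list # find the RGB output port opened by the device
yarpview --name /view &
yarp connect /OpenNI2DeviceServer/imageFrame:o /view # example port name, adjust to yours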
I've found some broken links in your repositories. Which ones have been renamed?#
Most of this was done at https://github.com/roboticslab-uc3m/questions-and-answers/issues/2
- https://github.com/roboticslab-uc3m/teo-body -> https://github.com/roboticslab-uc3m/yarp-devices
- https://github.com/roboticslab-uc3m/teo-head -> https://github.com/roboticslab-uc3m/vision and https://github.com/roboticslab-uc3m/speech
- https://github.com/roboticslab-uc3m/teo-main (old version) -> https://github.com/roboticslab-uc3m/kinematics-dynamics
- https://github.com/roboticslab-uc3m/best-practices -> https://github.com/roboticslab-uc3m/developer-manual
- https://github.com/roboticslab-uc3m/teo-software-manual -> https://github.com/roboticslab-uc3m/teo-developer-manual
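If you have an old clone of one of these repositories, you can point it at the new location instead of recloning, e.g. for the first rename above (the same pattern applies to the rest):
cd teo-body # your existing local clone
git remote set-url origin https://github.com/roboticslab-uc3m/yarp-devices
git fetch origin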
I have read this page and related links, and I have doubts and/or comments. What should I do?#
Please follow these steps:
- Read the Asking Questions section as many times as required to succeed at its self-evaluation.
- Follow its recommendations, which you will know because you have succeeded in its self-evaluation. ^^