Software, Projects, and Manuals
ROBOT PERCEPTION, DECISION-MAKING, CONTROL
Deep learning for semantic image segmentation
Tools: TensorFlow, Keras, Python, Unity game engine
The objective of this project was to design a deep neural network that allows a quadcopter to locate a target of interest (a person) in a series of images captured with its camera. Using this information, the quadcopter is able to follow the target in an environment populated with other virtual humans and objects.
The image below shows the quadcopter following the target of interest, alongside the images processed by the neural network (right, displayed with pyqtgraph). The target of interest is denoted by a dark purple silhouette, while the other people are denoted by green silhouettes.

The chosen architecture was a fully convolutional network (FCN), which achieved a final score of 0.46 (above the base requirement of 0.4). This network was later used in simulation to enable the quadcopter to follow the target through crowded and uncrowded areas in a virtual environment.
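To give a sense of the architecture, below is a minimal Keras sketch of an FCN: an encoder of strided separable convolutions, a 1x1 convolution in place of fully connected layers, and a decoder of bilinear upsampling with skip connections. The input resolution, layer widths, and depth are illustrative assumptions, not the exact network used in the project.

```python
# A minimal FCN sketch for 3-class segmentation (background / other people /
# target), assuming 160x160 RGB inputs. Layer sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fcn(input_shape=(160, 160, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Encoder: separable convolutions with stride 2 downsample the image.
    e1 = layers.SeparableConv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    e2 = layers.SeparableConv2D(64, 3, strides=2, padding="same", activation="relu")(e1)

    # 1x1 convolution preserves spatial information (no flattening).
    mid = layers.Conv2D(128, 1, activation="relu")(e2)

    # Decoder: bilinear upsampling + skip connections recover resolution.
    d1 = layers.UpSampling2D(2, interpolation="bilinear")(mid)
    d1 = layers.Concatenate()([d1, e1])
    d1 = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(d1)

    d2 = layers.UpSampling2D(2, interpolation="bilinear")(d1)
    d2 = layers.Concatenate()([d2, inputs])
    d2 = layers.SeparableConv2D(32, 3, padding="same", activation="relu")(d2)

    # Per-pixel class probabilities.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d2)
    return models.Model(inputs, outputs)

model = build_fcn()
model.compile(optimizer="adam", loss="categorical_crossentropy")
```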
The following video shows the quadcopter following the target in simulation:
You can find more details on this project in this repository.
Perception for pick and place
Tools: ROS, Gazebo, Rviz, Python
The objective of this project was to construct a perception pipeline to allow the PR2 robot to recognize specific objects in a cluttered environment for pick and place operations. The pipeline takes as input noisy data from the robot’s RGB-D camera and outputs .yaml files containing object labels, their pick and place positions, and the arm to be used during these operations.

The perception pipeline consisted of three main parts:
- Part 1: Filtering and RANSAC plane fitting to clean the image and isolate the region of interest.
- Part 2: Clustering for segmenting the scene into individual objects.
- Part 3: Feature extraction, SVM training, and object recognition.
Given a list of objects to pick from the scene, the pipeline is used to recognize the desired objects and assign them labels. This information is later used to determine the object’s pick and place position and the arm to be used in this operation.
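Below is a minimal sketch of the three pipeline stages, using scikit-learn stand-ins (RANSACRegressor, DBSCAN, SVC) for the PCL-based tools used in the actual ROS pipeline; the thresholds and the color-histogram feature are illustrative assumptions.

```python
# A minimal sketch of the three pipeline stages on an (N, 6) point cloud
# array (x, y, z, r, g, b). Thresholds and features are assumptions.
import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

def remove_table_plane(cloud, dist_thresh=0.01):
    """Part 1: RANSAC plane fitting -- fit z = f(x, y) to find the table,
    then keep only the non-plane points (the objects on top of it)."""
    xy, z = cloud[:, :2], cloud[:, 2]
    ransac = RANSACRegressor(residual_threshold=dist_thresh).fit(xy, z)
    return cloud[~ransac.inlier_mask_]

def cluster_objects(cloud, eps=0.03, min_samples=30):
    """Part 2: Euclidean clustering to split the scene into objects."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cloud[:, :3])
    return [cloud[labels == k] for k in set(labels) if k != -1]

def color_histogram(cluster, bins=16):
    """Part 3a: a simple normalized color-histogram feature per cluster."""
    hists = [np.histogram(cluster[:, 3 + c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()

# Part 3b: an SVM trained offline on labeled clusters predicts object labels.
# X_train / y_train would come from captured training features (assumed here).
# svm = SVC(kernel="linear").fit(X_train, y_train)
# labels = [svm.predict([color_histogram(c)])[0] for c in clusters]
```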
Below is an example scene where this pipeline was used for object recognition. The robot was able to recognize 8/8 objects:

You can find more details on this project in this repository.
Pick and place
Tools: ROS, Moveit!, Gazebo, Rviz, Python
The objective of this project was to write an Inverse Kinematic Model (IKM) solver for the KUKA KR210 robot. This solver is responsible for computing the joint angles corresponding to a desired end-effector (gripper) trajectory. The algorithm was tested on pick and place operations, consisting of collecting objects from different locations on a shelf and depositing them in a bin.

To create the IKM solver, a kinematic analysis was first performed, consisting of the following steps:
- Mathematical description of the robot’s geometry and determination of its DH parameters.
- Computation of its Forward Kinematic Model (FKM).
- Computation of its Inverse Kinematic Model (IKM): Since the robot has a spherical wrist, this problem was decoupled into an inverse position kinematic problem (to determine the first 3 joint variables), and an inverse orientation kinematic problem (to determine the last 3 joint variables).
After implementing this analysis as a Python script, the robot was tested in 10 pick and place operations (with different spawn locations). The results show that the robot is able to successfully pick and place the objects 9/10 times while following the desired end-effector trajectories.
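For illustration, here is a minimal sketch of the position/orientation decoupling enabled by the spherical wrist. The geometric parameters and closed-form expressions are schematic placeholders; the actual KR210 solution follows from its DH parameters.

```python
# A minimal sketch of the IK decoupling for a 6-DOF arm with a spherical
# wrist: joints 1-3 from the wrist-center position, joints 4-6 from the
# residual rotation. Link parameters here are schematic placeholders.
import numpy as np

def wrist_center(p_ee, R_ee, d_wrist):
    """Step 1: back the wrist center off the end-effector position along
    the gripper z-axis (d_wrist = wrist-center-to-gripper distance)."""
    return p_ee - d_wrist * R_ee[:, 2]

def inverse_position(wc, a1, a2, d1, a3_eff):
    """Step 2: joints 1-3 from the wrist-center position via planar
    trigonometry (schematic two-link geometry)."""
    q1 = np.arctan2(wc[1], wc[0])
    r = np.hypot(wc[0], wc[1]) - a1         # radial distance from joint 2
    s = wc[2] - d1                          # height above joint 2
    # Law of cosines on the triangle formed by links 2 and 3.
    D = (r**2 + s**2 - a2**2 - a3_eff**2) / (2 * a2 * a3_eff)
    q3 = np.arctan2(np.sqrt(max(0.0, 1 - D**2)), D)
    q2 = np.arctan2(s, r) - np.arctan2(a3_eff * np.sin(q3), a2 + a3_eff * np.cos(q3))
    return q1, q2, q3

def inverse_orientation(R_ee, R_0_3):
    """Step 3: joints 4-6 from R_3_6 = R_0_3^T R_ee, extracted as Euler
    angles matching the wrist axes (convention-dependent)."""
    R = R_0_3.T @ R_ee
    q4 = np.arctan2(R[2, 2], -R[0, 2])
    q5 = np.arctan2(np.hypot(R[0, 2], R[2, 2]), R[1, 2])
    q6 = np.arctan2(-R[1, 1], R[1, 0])
    return q4, q5, q6
```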
The following video shows a pick and place operation in Rviz (2x actual simulation speed):
Here is another example in Gazebo (4x actual simulation speed):
You can find the code and a detailed writeup of this project in this repository.
Search and sample
Tools: Python, Unity game engine
This project was modeled after the “NASA sample return challenge”. Its objective was to enable a rover to navigate autonomously in an unknown environment by developing its abilities to perceive and decide. To achieve this, two modules were developed: a perception module and a decision-making module. The perception module was used to detect obstacles, rocks (samples of interest), and navigable terrain using computer vision techniques such as perspective transforms, color space transformations, thresholding, and distortion reduction. The following video shows how the rover is able to perceive the navigable terrain (blue), rocks (yellow), and obstacles (red) to update its worldmap.
The decision-making module consisted of a decision tree which ensured obstacle avoidance and a wide exploration of the navigable terrain. After implementing these perception and decision steps, the rover was launched in autonomous mode several times. Its performance was averaged over 10 trials with respect to the base requirements for the project (40% mapping at 60% fidelity). The rover is able to map at least 40% of the terrain with an accuracy of approximately 81% while finding at least one rock. The rover is of course capable of mapping more terrain and finding more rocks, but these statistics were computed to compare its performance with the base requirements.
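As a rough illustration of both modules, here is a minimal sketch of the perspective transform, a color threshold for navigable terrain, and a simplified steering rule (steer toward the mean angle of the navigable pixels). The threshold values and warp points are assumptions, and the real decision tree contains more branches than this single rule.

```python
# A minimal sketch of the perception step (perspective transform + color
# thresholding) and a simplified decision rule. Values are placeholders.
import cv2
import numpy as np

def warp_to_top_down(img, src, dst):
    """Perspective transform from the rover camera to a top-down map view.
    src/dst are 4x2 float32 point arrays chosen by calibration."""
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

def color_threshold(img, rgb_min=(160, 160, 160)):
    """Navigable terrain (light sand) is brighter than obstacles (dark rock)."""
    r, g, b = rgb_min
    mask = (img[:, :, 0] > r) & (img[:, :, 1] > g) & (img[:, :, 2] > b)
    return mask.astype(np.uint8)

def steering_angle(nav_mask):
    """Decision step (simplified): steer toward the mean angle of the
    navigable pixels, expressed in rover-centric polar coordinates."""
    ys, xs = nav_mask.nonzero()
    # Rover-centric frame: x forward from the bottom-center of the warp.
    x_rover = nav_mask.shape[0] - ys
    y_rover = nav_mask.shape[1] / 2 - xs
    angles = np.arctan2(y_rover, x_rover)
    return np.degrees(np.mean(angles)) if angles.size else 0.0
```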
The following video shows the rover navigating autonomously while mapping 81% of its environment with 65% fidelity and several rock sample detections.
You can find the code and a detailed writeup in this repository.
ROBOT DESIGN
ARACHNIS: A GUI to design and analyze cable-driven robots
Tools: MATLAB
ARACHNIS is a graphical user interface for the analysis and parametric design of Cable-Driven Parallel Robots (CDPRs). The interface takes as input the robot’s design parameters and the task specifications, and returns a visualization of a set of workspaces used to assess the design. These workspaces are the Wrench Feasible Workspace (WFW) and the Interference-Free Constant Orientation Workspace (IFCOW). The WFW is traced from the capacity margin, a measure of the robustness of the robot’s equilibrium, while the IFCOW is traced via an existing technique for determining the interferences between the moving parts of CDPRs.
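To illustrate the kind of test behind a wrench-feasible workspace, here is a minimal Python sketch that checks a single pose by linear programming: the pose is feasible if some cable-tension vector within bounds balances the required wrench. The wrench matrix and bounds are placeholder values; ARACHNIS itself is MATLAB-based and uses the capacity margin rather than this plain feasibility test.

```python
# A pose is WFW-feasible if some cable-tension vector t in [t_min, t_max]
# satisfies W t = w for every required wrench w. Numbers are placeholders.
import numpy as np
from scipy.optimize import linprog

def wrench_feasible(W, w_required, t_min, t_max):
    """Return True if cable tensions within bounds can balance w_required.

    W          : (n_dof, n_cables) wrench matrix at the current pose
    w_required : (n_dof,) external wrench the platform must resist
    """
    n = W.shape[1]
    res = linprog(c=np.zeros(n),             # pure feasibility problem
                  A_eq=W, b_eq=w_required,
                  bounds=[(t_min, t_max)] * n,
                  method="highs")
    return res.success

# Example: a planar 3-DOF platform driven by 4 cables (placeholder numbers).
W = np.array([[ 0.8, -0.8,  0.8, -0.8],
              [ 0.6,  0.6, -0.6, -0.6],
              [ 0.1, -0.1, -0.1,  0.1]])
print(wrench_feasible(W, np.array([0.0, -9.81, 0.0]), t_min=1.0, t_max=100.0))
```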

Reference
- Ana Lucia Cruz Ruiz, Stéphane Caro, Philippe Cardou, François Guay. “ARACHNIS: Analysis of Robots Actuated by Cables with Handy and Neat Interface Software”, in Proceedings of the Second International Conference on Cable-Driven Parallel Robots. Link
Download
Click here to download the latest version of the interface.
A 3 DOF PPR parallel robot
Tools: CATIA, MATLAB
This project consisted of designing a parallel robot for assembly operations. The desired mobility for this task was two translations and one rotation. Such mechanisms are commonly known as cylindrical parallel mechanisms, and they are usually employed in machining operations. To comply with the desired assembly task and motion pattern, the mechanism was designed according to the following priorities:
- Priority 1: Be capable of moving throughout a circular regular positional workspace (RPW) of diameter 500 mm;
- Priority 2: The moving platform should have a rotation range of ±30 degrees for any position of its geometric center within the RPW;
- Priority 3: The mechanism should be light;
- Priority 4: The legs of the mechanism should be identical, namely, the mechanism should be symmetrical;
- Priority 5: The point-displacement of the geometric center of the moving platform should be smaller than 1 mm for a payload of 500 N (the payload is assumed to be normal to the moving platform);
- Priority 6: The rotational error of the moving platform should be smaller than 1 degree for a moment of 100 N·m about the axis passing through the geometric center of the moving platform and normal to the latter.

Download
Coming soon!
For a detailed report on how the mechanism was designed, just drop me an email.
MOTION ANALYSIS/CONTROL
Low-dimensional control strategies for virtual characters
Tools: MATLAB, SimMechanics, Non-negative matrix factorization toolbox
This research work was done within the framework of the project ENTRACTE (Anthropomorphic Action Planning and Understanding), winner of the Grand Prix de l’ANR and funded by the French National Research Agency.
The objective of this project was to design simple and compact motion controllers for virtual characters by analyzing how humans control motion. To do this, experiments were conducted in which humans were asked to perform different motor tasks while their muscle activity and kinematics were recorded. This data was then analyzed to extract the underlying control strategies (or muscle synergies) using factorization and machine learning techniques. Such controllers were then adapted to command virtual characters in physics-based environments.
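As a rough illustration of the synergy-extraction step, below is a minimal sketch using non-negative matrix factorization in scikit-learn (the project itself used a MATLAB NMF toolbox). The EMG matrix here is a random placeholder; in practice it would contain rectified, low-pass-filtered EMG envelopes.

```python
# Synergy extraction via NMF: EMG envelopes E (muscles x time) are
# approximated as E ≈ W H, where the columns of W are muscle synergies
# and the rows of H their activation signals. Data is a placeholder.
import numpy as np
from sklearn.decomposition import NMF

n_muscles, n_samples, n_synergies = 8, 2000, 3

# Placeholder for rectified, filtered EMG envelopes (must be non-negative).
E = np.abs(np.random.randn(n_muscles, n_samples))

model = NMF(n_components=n_synergies, init="nndsvd", max_iter=500)
W = model.fit_transform(E)   # (n_muscles, n_synergies): synergy weights
H = model.components_        # (n_synergies, n_samples): activations

# Variance accounted for (VAF), a common criterion for choosing the
# number of synergies.
vaf = 1 - np.linalg.norm(E - W @ H)**2 / np.linalg.norm(E)**2
print(f"VAF with {n_synergies} synergies: {vaf:.2f}")
```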
Reference
- Ana Lucia Cruz Ruiz, Charles Pontonnier, and Georges Dumont. “A synergy-based control solution for overactuated characters: application to throwing”, Computer Animation and Virtual Worlds, 2016. Link