# Devices and Sensors

This section describes the sensors and devices used by HandsFree. It is divided into vision sensors, lidar, the upper layer controller, and so on, each introduced separately.

## 1.Vision Sensors

The vision sensors supported by HandsFree include the Xtion Pro, Xtion 2, Kinect 1, Kinect 2, and ZED camera.
We recommend the Xtion 2, which is relatively new and needs only USB 3.0 for power. The following is a brief introduction to these vision sensors.

Let's start with a comparison of parameters.

| Properties | Xtion Pro | Xtion 2 | Kinect 1.0 | Kinect 2.0 |
| :------:|:-----:| :-----:|:-------:|:----------:|
| Length(cm) | 18 | 11 | 28 | 25 |
| Width(cm) | 3.6 | 3.5 | 6 | 8.5 |
| Height(cm) | 5 | 3.5 | 7.5 | 6.5 |
| Effective Depth of Field(m) | 0.8-3.5 | 0.8-3.5 | 0.8-4.0 | 0.5-8.0 |
| Effective Viewing Angle(degree) | 70 | 74*52 | 57*43 | 70*60 |
| Power/Interface | USB 2.0 | USB 3.0 | adapter and USB 2.0 | adapter and USB 3.0 |

The Kinect 2.0 weighs 1.25 kg, plus a 1 kg adapter, so it is not particularly recommended for mounting on the robot.

The following is presented in the recommended order.

### 1.1,Xtion 2

The Xtion 2 is a relatively new depth camera: compared with the first generation, the body is significantly smaller and the performance slightly improved. The market price is around $2000, and it is mainly recommended for mobile development. Its features are as follows:

* Powerful depth sensing: accurate and expandable 640 x 480 depth resolution
* High RGB resolution: up to five megapixels (2592 x 1944)
* Power saving: USB 3.0 low power consumption
* Compact size: only about 110 x 35 x 35 mm
* Developer-friendly: OpenNI 2.2 compatible and supports multiple operating systems

However, the camera is not well supported on Ubuntu 16.04, where only depth information is available and connecting to the RGB stream is problematic. On Ubuntu 14.04 it also requires some extra work.
See the RGBD camera experiment for details on installation and use.

### 1.2,Kinect v1

Kinect for Xbox 360, or Kinect for short, is a peripheral developed by Microsoft for the Xbox 360 console. It allows players to operate the Xbox 360 system interface with voice commands or gestures instead of a handheld controller, and it captures the player's full-body movements so that the body itself controls the game, delivering a "controller-free gaming and entertainment experience". The Kinect launched on November 4, 2010 in the U.S. at an MSRP of $149, sold eight million units in its first 60 days, and has applied for the Guinness World Record as the fastest-selling consumer electronics product in the world.

Because it is so widely used, many developers have written drivers for it, which makes it much easier to work with. It is now discontinued, but used units can be bought cheaply on Taobao, plus a few hundred more for the adapter. It is recommended that novices try this device first; it is capable enough for the basic functions.

Cons: it is powered through its own non-removable 220 V to 12 V adapter. If used on a mobile robot, the cord needs to be cut and powered from the 12 V output of the HandsFree power management module. If used only in a fixed position, no changes are needed.
The main reason is that Xtion and Kinect target different users. According to the official positioning, the Xtion is mainly for developers, usable for robotics design, security monitoring, automotive engineering, or 3D scanning, while the Kinect is mainly for game development and can stay fixed in one position. So for a mobile robot the Xtion 2 is preferred (the Xtion Pro is also fine if you can still buy one), then the Kinect v1; the Kinect v2 is not recommended because it is too heavy.

**How to use**:

* First install the driver

```
sudo apt-get install ros-indigo-freenect-*
rospack profile
```

* Connect the camera and use

```
roslaunch freenect_launch freenect-registered-xyzrgb.launch
```
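Once the launch file is running, you can check that the camera is actually streaming. These commands are illustrative; the topic names are freenect's defaults and may differ on your setup:

```shell
# List the camera topics published by freenect to confirm the driver is up:
rostopic list | grep /camera
# View the registered RGB image (image_view is a standard ROS tool):
rosrun image_view image_view image:=/camera/rgb/image_color
```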



### 1.3,Xtion Pro

Xtion Pro was the world's first dedicated professional PC somatosensory development solution, compatible with the OpenNI NITE middleware SDK, making the development of somatosensory applications and games easy.
However, it has been discontinued and is basically no longer purchasable, which is a pity because it performs well and is lighter. If you can still get one, it is very good to use.

**How to use**:   

* First install the driver

```
sudo apt-get install ros-indigo-openni-camera
sudo apt-get install ros-indigo-openni-launch
sudo apt-get install ros-indigo-openni2-launch
```


* Connect the camera and use

```
roslaunch openni2_launch openni2.launch
```


### 1.4,Kinect v2

Kinect v2 is also relatively new, but it is one of our least recommended options, because it is too heavy and bulky. The parameters listed earlier do not fully show this: the other cameras are measured with their bases, and the cameras themselves are smaller, while this one is a big squarish unit that also needs a very heavy power adapter. On a mobile robot its center of gravity cannot be too high, and the power supply must be considered: as with the first generation, the power cord needs to be cut and connected to 12 V, which is not reversible.
On the other hand, because it is widely used it is quite developer-friendly; it is just a little too heavy. The price is around 1400, and be sure to purchase the adapter kit when you buy.

**How to use**:   

* First install the driver   

Using Kinect v2 on ROS requires a few more steps, but it is not very troublesome; just follow the [install](https://github.com/code-iai/iai_kinect2#install) instructions on GitHub. A brief translation:
1. Install ROS and configure the environment (already installed, skip)
2. Install libfreenect2; please refer to the [specific installation steps](https://github.com/OpenKinect/libfreenect2/blob/master/README.md#linux)
3. Clone the repository locally and compile

```
cd ~/catkin_ws/src/
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"
```


* Connect the camera and use

```
roslaunch kinect2_bridge kinect2_bridge.launch
```



## 2.Lidar

This section introduces the 2D lidars supported by HandsFree; the main recommendations are the Hokuyo series, the rplidar series, and an entry-level lidar, the EAI X4.   
A brief comparison is presented below.

| Properties |Hokuyo URG-04LX | rplidar A2 |rplidar A1 | EAI X4 |
| :------:|:-----:| :-----:|:-------:|:----------:|
| Distance(m) | 0.02-5.6 | 0.15-12 | 0.12-12| 0.12-10 |
| Angle(degree) | 240 | 360 | 360 | 360 |
| Angle Resolution (deg.) | 0.36 | 0.9 | <=1 | * |
| Frequency(Hz) | 10 | 5.5 | 5.5 | 6-12 |
| Error | 3%/m | 1%/m | 1%/m | * |
| Measurement frequency (times/s) | * | 8000 | 8000 | 5000|
| Price ($) | 6400 | 2800 | 500 | 500 |

If there are other requirements, please refer to [other available lidars](http://wiki.ros.org/Sensors#A2D_range_finders) on the ROS official website.

### 2.1,Hokuyo URG-04LX/UTM-30LX

>The HOKUYO URG-04LX is a 2D laser scanner with a 5.6 m, 240° measuring range, DC 5 V input (powered over USB), and a 100 ms scan time, usable for robot obstacle avoidance and position recognition. Its high accuracy, high resolution, and wide field of view give autonomous navigation robots good environmental recognition capability. The compact design saves installation space, with low weight and low power consumption; it is unaffected by bright light, also works in the dark, and measures without contact.   

After all, at that price, stability and accuracy are guaranteed. But since it is more expensive, it is recommended for use on Giraffe or Stone. Its angular resolution figure is fairly accurate; as for the error figures, I personally feel the domestic products are somewhat exaggerated.   
So if you have a large robot or high accuracy requirements, this lidar is recommended.  

**How to use**:   

* Install the driver

```
sudo apt-get install ros-indigo-hokuyo-node
```


* Connect the lidar and use

```
roslaunch rplidar_ros hokuyo.launch
```

The above command requires the relevant HandsFree software to be installed.   
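Before launching, it helps to confirm the scanner's serial port is accessible. A minimal sketch, assuming the Hokuyo enumerates as `/dev/ttyACM0` (the usual default; the device path is an assumption, check yours with `ls /dev/ttyACM*`):

```shell
#!/bin/sh
# check_port: report whether the given serial device exists and is writable.
# Usage: check_port [/dev/ttyACM0]
check_port() {
  dev="${1:-/dev/ttyACM0}"
  if [ -w "$dev" ]; then
    echo "ok: $dev is writable"
  else
    # A common remedy is: sudo chmod a+rw /dev/ttyACM0
    echo "fix: $dev is missing or not writable"
    return 1
  fi
}
```

If the port exists but is not writable, `sudo chmod a+rw /dev/ttyACM0` (or a udev rule) is the usual fix before launching the node.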

### 2.2 rplidar A2

>When working, the ranging core of RPLIDAR A2 will rotate clockwise to achieve 360-degree scanning and ranging detection of the surrounding environment, thus obtaining a contour map of the surrounding environment.   
>Using custom-made special components, the carefully designed internal mechanical system ensures superior performance while the product is only 4 cm thick, making it suitable for all types of service robots.   
>Improved internal optics and algorithm system with sampling frequency up to 8000 times/sec, allowing robots to build maps more quickly and accurately.   
>The belt drive method in the first generation is abandoned, and the self-designed brushless motor is adopted, which greatly reduces the mechanical friction during operation and runs very smoothly with almost no noise.   
>With the brushless motor and optical magnetic fusion technology, it can reach a service life of more than 5 years under 7*24 hours continuous operation.   

Overall, the A2 lidar is excellent: small and flexible. Its only weakness is that when powered from USB alone the supply is occasionally insufficient; adding an external power supply improves this a lot. Recommended for use on Stone and Giraffe.   

**How to use**:   

* Install the driver

If you have downloaded the HandsFree code, you can skip this step.   
Refer to the official [driver installation instructions](https://github.com/robopeak/rplidar_ros#how-to-build-rplidar-ros-package) on GitHub: download it into the src directory of a ROS workspace, then run catkin_make.
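The steps just described can be sketched as follows (assumes a catkin workspace at `~/catkin_ws`, the conventional location; skip if you already have the HandsFree code):

```shell
# Clone the driver into the workspace's src directory and build it:
cd ~/catkin_ws/src
git clone https://github.com/robopeak/rplidar_ros.git
cd ~/catkin_ws
catkin_make
# Make the freshly built package visible to roslaunch in this shell:
source devel/setup.bash
```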

* Connect the lidar and use

```
roslaunch rplidar_ros rplidar.launch
```


Note: try not to connect the lidar through a passive USB hub, as the voltage and current may not meet the requirement; the connection may appear successful, but no lidar data will be output.
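To keep the lidar's port name from shifting between `/dev/ttyUSB0` and `/dev/ttyUSB1` after replugging, a udev rule can pin a fixed alias. A sketch under assumptions: the rplidar's USB-serial chip is the common CP2102 (vendor `10c4`, product `ea60`; verify yours with `lsusb`), and the alias `/dev/rplidar` is our own choice:

```shell
# Write the rule to a local file first; the install step needs sudo and
# is shown as comments.
cat > rplidar.rules <<'EOF'
KERNEL=="ttyUSB*", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE:="0666", SYMLINK+="rplidar"
EOF
# sudo cp rplidar.rules /etc/udev/rules.d/rplidar.rules
# sudo udevadm control --reload-rules && sudo udevadm trigger
```

After replugging, the lidar then appears as `/dev/rplidar` regardless of enumeration order, and the launch file's serial port parameter can point at that alias.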

### 2.3 rplidar A1

>RPLIDAR A1 adopts laser triangulation ranging technology with a self-developed high-speed vision acquisition and processing mechanism, and can perform more than 8000 ranging operations per second.  
>The ranging core rotates clockwise to achieve 360-degree scanning and ranging of the surrounding environment, producing a contour map of the surroundings.   
>After connecting the RPLIDAR to your computer via USB cable, it can be used directly without any coding work.   
>The integrated wireless power supply and optical communication technology, an original optical-magnetic fusion design, completely solves the problems of electrical connection failure and short lidar life caused by physical contact wear.   

The A1 lidar received a hardware upgrade in late 2017 that substantially improved its performance, but there is still a gap compared with the A2. First, the A2 is lower and more compact; second, it is quieter and better looking; third, its performance, range, and resolution accuracy are all a little better. Still, the A1 is a very good way to get started.

The A1 and the EAI X4 introduced below both cost about 500 yuan. I recommend the A1: it has stood the test of time, while the X4 was only launched in November 2017. (But thanks to EAI anyway.)

**Use the same method as the A2.**


### 2.4 EAI X4

The EAI X4 launched as the undisputed king of value for money: at the time, the A1 was sold at 999 with a measurement frequency of only 2000 times/second, far below the X4's 5000 times/second. Thanks to the X4, all of the lidars from Slamtec (Silan Technology) were upgraded and reduced in price.   
The X4's measurement distance reaches 10 m, but its measurement frequency, 5000 times/s, is lower than that of the upgraded A1.
>Application scenarios are: autonomous map building, automatic obstacle avoidance, path planning, educational research, creator education and auxiliary positioning, etc.

**How to use**:   

* Install the driver   
Following the official [ROS usage instructions](https://github.com/EAIBOT/ydlidar#how-to-build-ydlidar-ros-package), first download it into a ROS workspace, then compile it, and finally set up the device.

* Connect the device and use

```
roslaunch ydlidar lidar_view.launch
```
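For any of the lidars in this section, once the launch file is running you can confirm that data is actually flowing. These commands are illustrative and assume a sourced ROS environment; `/scan` is the common default topic name and may differ:

```shell
# Confirm the driver advertises a laser scan topic:
rostopic list | grep scan
# Print one scan message to verify real range data is arriving:
rostopic echo -n 1 /scan
# Measure the publishing rate (should match the lidar's scan frequency):
rostopic hz /scan
```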

## 3.Upper Layer Controller

The upper layer controller is a relative term with respect to the underlying STM32 controller: it refers to the upper layer module that communicates with the sensors and the lower layer, generally a TK1/TX1 or an industrial PC (IPC).
The TK1 and TX1 are based on the ARM architecture, can be used for parallel processing and deep learning, and ROS provides installation methods for them. But the mainstream is still x86 processors, and for other reasons (described below) we still recommend the IPCs from our recommended packages. The following is a brief introduction to the two.

### 3.1,Industrial PC (IPC)

IPCs are generally based on the x86 architecture, and we recommend the following IPCs after conducting tests.

| Properties | High | Medium | Low |
| :------:|:-----:|:-----:|:-----:|
| CPU | i7-7500U | i3-6100U | N2830 |
| Hard drive | 240G SSD | 60G | 30G |
| Memory | 8G | 4G | 2G |

We have tested all three IPCs and they all perform well, but the choice should match the robot: for Mini we recommend the low version; for Stone, the medium or high version; for Giraffe, at least the high version, or an even higher configuration such as an IPC with a GTX 1060, since Giraffe's load capacity and battery are better and allow more options.

All the IPCs above use a 12 V power supply, so you can power them directly from the 12 V interface of the robot's power management system.

Specific installation and configuration steps can be found in how-to-use-mirror or configure environment

### 3.2,TK1/TX1

NVIDIA Tegra K1 is NVIDIA's first mobile processor with the same advanced features and architecture as a modern desktop GPU, while still using the low power consumption of a mobile chip.

As a result, the Tegra K1 allows embedded devices to use the exact same CUDA code that can also run on desktop GPUs (used by over 100,000 developers) with similar levels of GPU-accelerated performance as the desktop. In addition to the quad-core 2.3GHz ARM Cortex-A15 CPU and revolutionary Tegra K1 GPU, the Jetson TK1 board includes similar features to the Raspberry Pi, but also includes some PC-oriented features such as SATA, mini-PCIe and fans for continuous operation under heavy workloads.

NVIDIA Jetson TX1 is NVIDIA's second generation embedded platform developer kit, ideal for embedded solutions for smart drones and robots.
It comes with an NVIDIA Maxwell GPU with 256 CUDA cores, a 64-bit ARM A57 CPU, 4GB of LPDDR4 memory, 16GB of flash memory, Bluetooth, 802.11ac Wi-Fi module and Gigabit Ethernet card, and runs Linux for Tegra OS.

There are two main disadvantages of the TK1, so we do not recommend it:

* 1, It is easy to damage; the TK1 units we bought had various problems.
* 2, It has been discontinued, which means fewer and fewer people will develop on it in the future.

The TX1 is also ARM-based and can be used if you need it, but if it is not urgent we still recommend the IPCs that we have tested and provide, because development boards on the ARM architecture are not very friendly in some details.

For configuration steps, please refer to the TK1 flashing tutorial.
