Get the basic data of the robot

This section introduces how to obtain the robot's basic data by writing Python code. We will demonstrate this through the following four examples:

  • Obtain the basic parameters of the robot and print them out
  • Obtain IMU data and convert it into Euler angles and print it out
  • Get radar data and print it out
  • Obtain camera data and display the image through Python's OpenCV library

1 Obtain the basic information of the underlying hardware of the robot and print it out

The basic information of the robot's underlying hardware includes the embedded on-chip clock, the embedded CPU temperature, the battery voltage, the remaining battery power, and so on. Simply start the robot's handsfree_hw node, and these parameters will be published on the topic "/handsfree/robot_state". We only need to subscribe to that topic from a program or a terminal to obtain this data as the robot's state.

  1. Open the terminal and start the robot driver node

     roslaunch handsfree_hw handsfree_hw.launch
    

    Normally, the terminal will show that the handsfree_hw node has started.

  2. Open a second terminal to view the current topic

     rostopic list
    

    Under normal circumstances, the list of current topics will be printed on the terminal; /handsfree/robot_state is the topic we want to receive.

    You can also open robot_state.msg in the handsfree_msgs/msg folder to see what other fields the message contains.

  3. Open the third terminal and run the python file

     rosrun handsfree_tutorials get_hardware_info.py
    

    Under normal circumstances, the hardware data will be printed out.

    Since the data is printed continuously, press Ctrl+C to stop it.

  4. Code analysis

    Let's take a quick look at the code. It is in the /home/handsfree/handsfree/handsfree_ros_ws/src/handsfree/handsfree_tutorials/script/1_get_sensors directory, in the file get_hardware_info.py, and is reproduced below:

    ```
    #!/usr/bin/env python
    # coding=UTF-8

    import rospy
    from handsfree_msgs.msg import robot_state

    def callback(data):  # callback function
        rospy.loginfo("the embedded system_time: %fus", data.system_time)  # lower computer system time
        rospy.loginfo("the embedded cpu temperature is: %f", data.cpu_temperature)  # cpu temperature
        rospy.loginfo("the battery voltage is: %f", data.battery_voltage)  # battery voltage
        rospy.loginfo("the battery power remain is: percent %f", data.power_remain*100)  # battery power remaining
        rospy.loginfo("-------------------------------------------\n\r")

    def sub_state():
        rospy.init_node('sub_state', anonymous=True)
        rospy.Subscriber("/handsfree/robot_state", robot_state, callback)  # the name of the topic to subscribe to
        rospy.spin()

    if __name__ == '__main__':
        sub_state()
    ```

It can be seen that this code is modeled after the ROS tutorial [Writing a simple subscriber](http://wiki.ros.org/cn/ROS/Tutorials/WritingPublisherSubscriber(python)/).

Receiving messages is actually done by `rospy.Subscriber`:

  + The first parameter is the name of the topic you want to receive.
  + The second parameter is the message type of that topic, which we imported with `from handsfree_msgs.msg import robot_state`.
  + The final parameter is the callback function: whenever a message is received, it is passed to this function for processing. The `rospy.loginfo` calls inside the callback print the received data to the terminal.
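To see this subscriber/callback pattern without a running ROS system, here is a small offline sketch. The `FakeRobotState` class is a hypothetical stand-in for the `robot_state` message (not part of handsfree_msgs), used only to show how the callback reads the message fields:

```python
# Offline sketch of the callback pattern: a fake message object stands in
# for handsfree_msgs/robot_state, so no ROS installation is needed.
class FakeRobotState(object):  # hypothetical stand-in, for illustration only
    def __init__(self, system_time, cpu_temperature, battery_voltage, power_remain):
        self.system_time = system_time
        self.cpu_temperature = cpu_temperature
        self.battery_voltage = battery_voltage
        self.power_remain = power_remain

def format_state(data):
    # Mirrors what the rospy.loginfo calls in the callback would print
    return [
        "the embedded system_time: %fus" % data.system_time,
        "the embedded cpu temperature is: %f" % data.cpu_temperature,
        "the battery voltage is: %f" % data.battery_voltage,
        "the battery power remain is: percent %f" % (data.power_remain * 100),
    ]

msg = FakeRobotState(123456.0, 41.5, 12.3, 0.5)
lines = format_state(msg)
print(lines[3])  # -> the battery power remain is: percent 50.000000
```

In the real script, ROS calls `callback` for you each time a new message arrives on the topic; here we simply call the formatting logic by hand.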

2 Get the IMU data and convert it into Euler angles and print it out

IMU data comes from the inertial measurement unit. ROS already defines a message for it, Imu.msg, which includes the current orientation, angular velocity, and linear acceleration, along with their covariances. We have already packaged this: once the handsfree_hw node is started, the data is published on a topic (in the following tutorial we only output the Euler angles of the orientation).

  1. Open the terminal and start the robot driver node (if it is already running, there is no need to start it again)

     roslaunch handsfree_hw handsfree_hw.launch
    

    Normally, the terminal will show that the handsfree_hw node has started.

  2. Open a second terminal to view the current topic

     rostopic list
    

    Under normal circumstances, the list of current topics will be printed on the terminal; /handsfree/imu is the topic we want to receive.

  3. Open the third terminal and run the python file

     rosrun handsfree_tutorials get_imu.py
    

    Under normal circumstances, the Euler angle data will be printed out.

    Since the data is printed continuously, press Ctrl+C to stop it.

  4. Code analysis

    Let's take a quick look at the code. It is in the /home/handsfree/handsfree/handsfree_ros_ws/src/handsfree/handsfree_tutorials/script/1_get_sensors directory, in the file get_imu.py, and is reproduced below:

    ```
    #!/usr/bin/env python
    # coding=UTF-8

    import rospy
    import tf
    from tf.transformations import *
    from sensor_msgs.msg import Imu

    def callback(data):
        # euler_from_quaternion (from tf) converts a quaternion into Euler angles
        (r, p, y) = tf.transformations.euler_from_quaternion((data.orientation.x,
                                                              data.orientation.y,
                                                              data.orientation.z,
                                                              data.orientation.w))
        # the angles are in radians; convert them to degrees for readability
        rospy.loginfo("Roll = %f, Pitch = %f, Yaw = %f",
                      r*180/3.1415926, p*180/3.1415926, y*180/3.1415926)

    def get_imu():
        rospy.init_node('get_imu', anonymous=True)
        rospy.Subscriber("/handsfree/imu", Imu, callback)  # the name of the topic to subscribe to
        rospy.spin()

    if __name__ == '__main__':
        get_imu()
    ```

It can be seen that this code is modeled after the ROS tutorial [Writing a simple subscriber](http://wiki.ros.org/cn/ROS/Tutorials/WritingPublisherSubscriber(python)/).

Here we mainly discuss the `tf.transformations.euler_from_quaternion` function. Its purpose is to convert a quaternion into Euler angles to make the output more intuitive. Its usage is simple: pass in the four quaternion components, and it returns the three Euler angles (roll, pitch, yaw) in radians.
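Since `tf` is only available inside a ROS environment, the same conversion can be sketched in plain Python. The function name `quat_to_euler` below is ours, not part of tf; the formulas follow the standard quaternion-to-Euler conversion that matches tf's default axis convention:

```python
import math

def quat_to_euler(x, y, z, w):
    """Convert a quaternion (x, y, z, w) to (roll, pitch, yaw) in radians."""
    # Roll: rotation about the x axis
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Pitch: rotation about the y axis, clamped to avoid domain errors at the poles
    sinp = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sinp)
    # Yaw: rotation about the z axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# A 90-degree rotation about z has quaternion (0, 0, sin(45°), cos(45°))
r, p, y = quat_to_euler(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
print(math.degrees(y))  # close to 90.0
```

This is only an illustrative sketch; inside a ROS program you should keep using `tf.transformations.euler_from_quaternion`.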

3 Get the lidar data and print it out

Lidar detects a target by emitting a laser beam and receiving the signal reflected back from the target point (the target echo); by comparing the echo with the transmitted signal and processing it appropriately, information about the target can be obtained. Generally, the lidar driver packages this data and publishes it on a topic.

  1. Open the terminal and start the lidar node

     roslaunch handsfree_lidar lidar.launch
    

    Normally, the terminal will show that the rplidar node has started.

  2. Open a second terminal to view the current topic

     rostopic list
    

    Under normal circumstances, the list of current topics will be printed on the terminal; /scan is the topic we want to receive.

    The data structure of this message is the official ROS LaserScan.msg.

  3. Open the third terminal and run the python file

     rosrun handsfree_tutorials get_lidar_scan.py
    

    Under normal circumstances, the scan data will be printed out.

    Since the data is printed continuously, press Ctrl+C to stop it.

  4. Code analysis

    Let's take a quick look at the code. It is in the /home/handsfree/handsfree/handsfree_ros_ws/src/handsfree/handsfree_tutorials/script/1_get_sensors directory, in the file get_lidar_scan.py, and is reproduced below:

    ```
    #!/usr/bin/env python
    # coding=UTF-8

    import rospy
    from sensor_msgs.msg import LaserScan

    def callback(data):
        rospy.loginfo("the angle_min is %f", data.angle_min)  # print some information
        rospy.loginfo("the angle_max is %f", data.angle_max)
        rospy.loginfo("the scan data is ")
        for i in range(0, len(data.ranges)):
            print data.ranges[i],
        print "\n"

    def sub_state():
        rospy.init_node('sub_scan', anonymous=True)
        rospy.Subscriber("/scan", LaserScan, callback)  # the name of the topic to subscribe to
        rospy.spin()

    if __name__ == '__main__':
        sub_state()
    ```

This code is relatively simple, and it is very similar to `get_hardware_info.py`, so I won’t go into details.
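As a small extension beyond the tutorial script: once you have `angle_min`, `angle_increment`, and `ranges` from a LaserScan message, you can compute useful quantities such as the nearest obstacle and its bearing. Here is a self-contained sketch using made-up scan values (the helper `nearest_obstacle` is ours, not part of ROS):

```python
import math

def nearest_obstacle(angle_min, angle_increment, ranges):
    """Return (distance, angle in radians) of the closest valid reading."""
    best = None
    for i, r in enumerate(ranges):
        # Skip invalid readings: lidars report inf/NaN for out-of-range beams
        if math.isinf(r) or math.isnan(r):
            continue
        if best is None or r < best[0]:
            best = (r, angle_min + i * angle_increment)
    return best

# Fake scan: 5 beams covering -90 to +90 degrees in 45-degree steps
ranges = [2.0, float('inf'), 0.5, 1.2, 3.0]
dist, angle = nearest_obstacle(-math.pi / 2, math.pi / 4, ranges)
print(dist, math.degrees(angle))  # nearest obstacle: 0.5 m straight ahead
```

The same loop could run inside the callback above, with `data.angle_min`, `data.angle_increment`, and `data.ranges` taken from the received message.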

4 Get the camera data and display the image through Python's OpenCV library

Camera data consists of images, such as the color image and depth image captured by the camera. These are generally packaged by the camera driver and published on topics, so we only need to subscribe to them.

  1. Open the terminal and start the camera node

     roslaunch handsfree_camera camera.launch
    

    Normally, the terminal will show that the camera node has started.

    Alternatively, you can use this command to open the RViz visualization interface while starting the camera node (choose one of the two):

     roslaunch handsfree_camera view_camera.launch
    

    Normally, an RViz window showing the camera view will open.

  2. Open a second terminal to view the current topic

     rostopic list
    

    Under normal circumstances, the list of current topics will be printed on the terminal; /camera/rgb/image_raw is the topic we want to receive.

    The rgb topics carry the normal color image, and the depth topics carry the depth image.

    The data structure of this message is the official ROS Image.msg.

  3. Open the third terminal and run the python file

     rosrun handsfree_tutorials get_photo.py
    

    Under normal circumstances, a window showing the image and video stream will open.

  4. Code analysis

    Let's take a quick look at the code. It is in the /home/handsfree/handsfree/handsfree_ros_ws/src/handsfree/handsfree_tutorials/script/1_get_sensors directory, in the file get_photo.py, and is reproduced below:

    ```
    #!/usr/bin/env python
    # coding=UTF-8

    import cv2
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge, CvBridgeError  # converting between ROS and OpenCV requires cv_bridge

    def callback(data):
        try:
            cv_image = CvBridge().imgmsg_to_cv2(data, "bgr8")  # use cv_bridge to convert the message to an OpenCV Mat
        except CvBridgeError as e:
            print(e)
        (rows, cols, channels) = cv_image.shape
        cv2.imshow("Image window", cv_image)  # display the image
        cv2.waitKey(30)  # show the next frame after 30 ms

    def get_photo():
        rospy.init_node('get_photo', anonymous=True)
        rospy.Subscriber("/camera/rgb/image_raw", Image, callback)  # the name of the topic to subscribe to
        rospy.spin()

    if __name__ == '__main__':
        get_photo()
    ```

Here we focus on cv_bridge. Since we want to display the image with OpenCV, we need to convert the ROS image message into an OpenCV image, and cv_bridge is what connects the two ends.

The call `cv_image = CvBridge().imgmsg_to_cv2(data, "bgr8")` converts the message into an object that OpenCV can manipulate; `imshow` then displays it.
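Conceptually, a bgr8 Image message is just a flat byte buffer plus height/width metadata, and `imgmsg_to_cv2` reshapes it into a rows x cols x 3 array. The sketch below illustrates that reshaping in plain Python (illustrative only; the real cv_bridge also handles encodings, row strides, and endianness):

```python
def flat_to_pixels(data, height, width):
    """Turn a flat list of bgr8 bytes into a nested [row][col] = (b, g, r) structure."""
    pixels = []
    for row in range(height):
        row_pixels = []
        for col in range(width):
            # Each pixel occupies 3 consecutive bytes: blue, green, red
            base = (row * width + col) * 3
            row_pixels.append(tuple(data[base:base + 3]))
        pixels.append(row_pixels)
    return pixels

# A fake 2x2 "image": four pixels, 3 bytes (b, g, r) each
raw = [255, 0, 0,   0, 255, 0,
       0, 0, 255,   255, 255, 255]
img = flat_to_pixels(raw, 2, 2)
print(img[0][1])  # the green pixel: (0, 255, 0)
```

In practice you never do this by hand; cv_bridge produces a NumPy array directly, which is what `cv2.imshow` expects.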
