Voice patrol
In the previous sections we walked through the whole process of implementing custom voice interaction. Next we will learn how to make the robot patrol using voice commands. The voice patrol function is itself built on the custom interaction design explained earlier.
Overall effect
After running the default demo, you can tell the robot by voice to start or end a patrol, and it will drive through the preset waypoints.
Implementation process
Simulation use
First, the robot must be woken up with the wake-up word so that it enters the awake state. Therefore, start the offline command-word detection node and the VAD detection node explained earlier:
roslaunch handsfree_speech offline_interactive_ros.launch  //voice interaction node
rosrun handsfree_speech vad_record.py  //VAD detection node
Then run the voice patrol node:
rosrun handsfree_speech multi_point_patrol.py  //voice patrol node
As described in the first section of the voice tutorial, we can test in a simulation environment to see the effect of the voice interaction. The corresponding simulation environments are provided here as well:
roslaunch handsfree_stage demo_handsfree_room_stage.launch
roslaunch handsfree_stage demo_handsfree_xuda_stage.launch
These two commands launch different simulation environments. If you do not have enough open space, or are not yet familiar with the system, you can run either one of them and test in simulation instead.
In the simulation environment you can exercise the voice interaction function, observe how it executes, and debug it.
Real machine use
After we are familiar with the voice function, we can start debugging the robot in the actual environment.
First, start the voice interaction node and the VAD detection node:
roslaunch handsfree_speech offline_interactive_ros.launch  //voice interaction node
rosrun handsfree_speech vad_record.py  //VAD detection node
Then run the voice patrol node:
rosrun handsfree_speech multi_point_patrol.py  //voice patrol node
Since we are running on the real robot, we also need to start the driver node:
roslaunch handsfree_hw handsfree_hw.launch
Open a second terminal and start the lidar node:
roslaunch handsfree_lidar lidar.launch
Open a third terminal and start the move_base node:
roslaunch handsfree_2dnav move_base_with_amcl.launch map_name:=my_map
Open a fourth terminal and start the RViz visualizer:
rosrun rviz rviz -d `rospack find handsfree_2dnav`/rviz/navigation.rviz
Initial position calibration: use the 2D Pose Estimate tool in the RViz toolbar to set the initial pose, repeating until the robot's pose on the map matches its actual pose.
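If you prefer to set the initial pose programmatically instead of clicking in RViz, AMCL also accepts a pose estimate on the /initialpose topic. The sketch below shows the idea; the x, y and yaw values are placeholders, not coordinates from the tutorial map.

#!/usr/bin/env python
# Minimal sketch: publish an initial pose estimate to AMCL on /initialpose.
# The x/y/yaw values are placeholders and must match your robot's real pose.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import quaternion_from_euler

rospy.init_node('set_initial_pose')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped, queue_size=1, latch=True)
rospy.sleep(1.0)  # give the publisher time to connect

msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 1.0            # placeholder x in the map frame
msg.pose.pose.position.y = 2.0            # placeholder y in the map frame
q = quaternion_from_euler(0.0, 0.0, 0.0)  # placeholder yaw
msg.pose.pose.orientation.x = q[0]
msg.pose.pose.orientation.y = q[1]
msg.pose.pose.orientation.z = q[2]
msg.pose.pose.orientation.w = q[3]
pub.publish(msg)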
Once everything is in place, you can try out the Voice Patrol feature.
Core control
import rospy
import actionlib
import smach
import move_base_msgs.msg

XL_goal = 0      # set to 1 by the "start patrol" voice command
XL_finish = 0    # set to 1 by the "exit voice" command

rospy.init_node('smach_serial_demo2', log_level=rospy.DEBUG)
server_movebase = actionlib.SimpleActionClient('/move_base', move_base_msgs.msg.MoveBaseAction)
connect_state = server_movebase.wait_for_server()

points = [[11.3, 50.1],   # target point 1, only x and y are needed
          [14.7, 41.9],   # target point 2
          [16.9, 18.2],   # target point 3
          [10.4, 25.4]]   # target point 4
You can set the robot's patrol waypoints in this part of the code; each entry is an [x, y] coordinate in the map frame.
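If you are not sure which map coordinates correspond to a spot in the room, one convenient way (a suggestion of this tutorial, not part of the demo script) is to click the spot with RViz's Publish Point tool and read the coordinates from the /clicked_point topic, for example with the small helper below.

#!/usr/bin/env python
# Helper sketch: print the map-frame coordinates of points clicked with
# RViz's "Publish Point" tool, so they can be copied into the points list.
import rospy
from geometry_msgs.msg import PointStamped

def clicked_point_callback(msg):
    rospy.loginfo('clicked point: x=%.2f  y=%.2f', msg.point.x, msg.point.y)

rospy.init_node('print_clicked_points')
rospy.Subscriber('/clicked_point', PointStamped, clicked_point_callback)
rospy.spin()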
-----------------------------------
def speech_callback(msg):
    # rospy.loginfo(msg)
    global XL_goal
    global XL_finish
    if msg.data == 'patrol':          # "start patrol" command word
        XL_goal = 1
    elif msg.data == 'stoppatrol':    # "stop patrol" command word
        XL_goal = 0
    elif msg.data == 'shutdown':      # "exit voice" command word
        XL_finish = 1
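This callback only takes effect once it is registered as a subscriber to the topic on which the offline command-word node publishes its recognition result. The topic name and message type below are assumptions for illustration only; check the real name with rostopic list.

# Sketch of how a callback like speech_callback is typically wired up.
# '/voice/offline_interactive_result' and std_msgs/String are assumed names,
# not confirmed from the handsfree_speech package.
import rospy
from std_msgs.msg import String

rospy.Subscriber('/voice/offline_interactive_result', String, speech_callback)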
----------------------------------
def execute(self, ud):
    goal = gen_new_movebase_goal('map',
                                 self.__target_x,
                                 self.__target_y,
                                 self.__next_target_x,
                                 self.__next_target_y)
    if XL_goal == 1:                       # patrol active: drive to this waypoint
        server_movebase.send_goal(goal)
        # todo: what should robot do when failed to finish the target point?
        is_server_availabel = server_movebase.wait_for_result()
        if is_server_availabel is False:
            return 'error_exit'
        else:
            if XL_finish == 1:             # "exit voice" requested: end the state machine
                return 'error_exit'
            else:
                return 'next_point'
    else:
        # patrol stopped (XL_goal == 0): no goal is sent to /move_base,
        # the machine simply steps on (or exits if the finish flag is set)
        return 'error_exit' if XL_finish == 1 else 'next_point'
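gen_new_movebase_goal is a helper defined elsewhere in multi_point_patrol.py. Below is a minimal sketch of what such a helper could look like, assuming it orients the robot toward the following waypoint; the real implementation in the package may differ.

import math
import rospy
import move_base_msgs.msg
from tf.transformations import quaternion_from_euler

def gen_new_movebase_goal(frame_id, target_x, target_y, next_x, next_y):
    # Build a MoveBaseGoal at (target_x, target_y), facing the next waypoint.
    goal = move_base_msgs.msg.MoveBaseGoal()
    goal.target_pose.header.frame_id = frame_id
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = target_x
    goal.target_pose.pose.position.y = target_y
    yaw = math.atan2(next_y - target_y, next_x - target_x)
    q = quaternion_from_euler(0.0, 0.0, yaw)
    goal.target_pose.pose.orientation.x = q[0]
    goal.target_pose.pose.orientation.y = q[1]
    goal.target_pose.pose.orientation.z = q[2]
    goal.target_pose.pose.orientation.w = q[3]
    return goal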
----------------------------------
with smach_serial_demo2:
    # busy-wait until the "start patrol" voice command sets XL_goal to 1
    while XL_goal == 0:
        b = 0
    if XL_goal == 1 and a == 0:
        a = a + 1
        # add every point in the points list into the state machine
        for target_point in range(0, size_points, 1):
            next_target_point = (target_point + 1) % size_points
            name_target_point = 'POINT%d' % target_point
            name_next_target_point = 'POINT%d' % next_target_point
            smach.StateMachine.add(name_target_point,
                                   MoveToPoint(target_x=points[target_point][0],
                                               target_y=points[target_point][1],
                                               next_target_x=points[next_target_point][0],
                                               next_target_y=points[next_target_point][1]),
                                   transitions={'next_point': name_next_target_point})
smach_serial_demo2.execute()
This code builds and runs the SMACH state machine: when the start patrol command is issued, the state machine starts to work.
About the SMACH state machine
SMACH provides two major components: actionlib integration and the smach_viewer. The smach_viewer can show in real time which state node the task is currently executing, which makes development and debugging easier. SMACH also provides action states such as fixed-point navigation, can wrap topics and services as states, and can turn a class into a state node. SMACH can also run states concurrently, execute tasks repeatedly, and build hierarchically nested complex state machines.
SMACH is helpful for concretely implementing complex robot behaviour: in SMACH, all possible states and state transitions can be specified explicitly. SMACH has the following characteristics:
Rapid prototyping: the Python-based state machine syntax makes it quick to build executable state machines.
Complex state machines: SMACH supports the design, maintenance and debugging of large, complex hierarchical state machines.
Introspection: SMACH allows introspection of state machines, including state transitions, user data, and initialization.
Because SMACH is aimed at task-level state machine descriptions, it is not suitable for unstructured tasks or for low-level systems that require high execution efficiency.
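As a quick illustration of the syntax (a self-contained toy example, not part of the patrol demo), a two-state SMACH machine looks like this:

#!/usr/bin/env python
# Toy SMACH example: two states that alternate a fixed number of times.
import rospy
import smach

class CountDown(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['again', 'done'])
        self.counter = 3

    def execute(self, userdata):
        self.counter -= 1
        return 'again' if self.counter > 0 else 'done'

class SayHello(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['back'])

    def execute(self, userdata):
        rospy.loginfo('hello from SMACH')
        return 'back'

rospy.init_node('smach_toy_example')
sm = smach.StateMachine(outcomes=['finished'])
with sm:
    smach.StateMachine.add('COUNT', CountDown(),
                           transitions={'again': 'HELLO', 'done': 'finished'})
    smach.StateMachine.add('HELLO', SayHello(),
                           transitions={'back': 'COUNT'})
sm.execute()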
Here the state machine keeps checking whether the current state has reached a new node; if so, the robot carries out the next action state and the patrol cycle continues.
When the end patrol command is issued, the SMACH state machine evaluates the conditions we designed. Once the stop condition is met, the state machine no longer sends the next goal to /move_base when it reaches a new state, so the robot stops patrolling.
When the exit voice command is issued and the flag indicating that the robot has finished its task is set, SMACH switches itself to the end state and closes the related process.
Finally, iFLYTEK provides an error code lookup. If an error code is reported when calling the iFLYTEK engine, you can query it there: Inquiry about iFLYTEK error code information.