18-642 Project 9

Page last updated 11/9/2018 (changelog)

In this project, you will explore the runtime monitor interface and write your own monitor to check for a specific invariant violation. You will also take care of all warnings and failing unit tests from Project 8 and complete a peer review for your unit tests. This project will prepare you for the upcoming and final Project 10, which will have you write more invariants and run acceptance tests.

We've been telling you all semester that there are some invariants we expect your turtle to observe, such as not running through walls. This is the first step in checking and enforcing those invariants. Project 10 will continue with more invariant checking.


Procedure:

  1. Fix any remaining warnings from static analysis left over from Project 8. Some reminders (a short illustration of typical fixes follows at the end of this step):
    1. The following compiler warning flags are now turned on: -Werror -Wextra -Wall -Wfloat-equal -Wconversion -Wparentheses -pedantic -Wunused-parameter -Wunused-variable -Wreturn-type -Wunused-function -Wredundant-decls -Wunused-value -Wswitch-default -Wuninitialized -O1 -Winit-self
    2. You may suppress any warning flag by commenting out the corresponding line with a "#".
    3. Do not comment out the "-Werror" flag. This turns all warnings into errors, which ensures the compilation will not complete until you have fixed all of the warnings.
      Note: If you absolutely need to turn off -Werror, remember that files will not re-compile unless they have been changed. If file.cpp compiles error-free but with 2 warnings, and you do not change file.cpp before compiling again, file.cpp will not be included in the compilation and those two warnings will not show up, so you might accidentally miss warnings. Turn -Werror on again before submitting the code to make sure you did not miss anything.
    4. Remember that the command catkin_make ece642rtle_student is all you need to build your code. You do not need to use build_run_turtle.sh if you do not intend to run the code.
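    5. For illustration only (hypothetical code, not from the course materials), the sketch below shows what fixing two of the enabled warnings typically looks like:

      #include <cmath>

      // -Wconversion warns on implicit narrowing; make the conversion explicit.
      int scalePercent(double ratio) {
        return static_cast<int>(ratio * 100.0);
      }

      // -Wfloat-equal warns on == between floating-point values;
      // compare against a tolerance instead.
      bool nearlyEqual(double a, double b) {
        return std::fabs(a - b) < 1e-9;
      }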
  2. Fix any failing unit tests left over from Project 8. Some reminders:
    1. Make sure to have 100% transition coverage, 100% branch coverage, and 8 (or fewer, if 100% data coverage is achieved) additional tests for data coverage.
    2. If your state chart has changed, you must update your unit tests.
    3. For any transition in your state chart, put a comment in your unit tests where you test that transition, for example "// Tests T1" (see the sketch at the end of this step).
    4. You can build and run unit tests independent of ROS, so you can choose to fix unit tests before you fix warnings or vice versa.
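    5. A sketch of the commenting pattern from item 3. Everything below -- the toy two-state chart, the transition function, and the transition labels -- is invented for illustration; substitute the states and tests from your own chart:

      #include <cassert>

      // Hypothetical two-state chart, just to show the comment convention.
      enum State { S_FORWARD, S_TURNING };

      State transition(State s, bool bumped) {
        return (s == S_FORWARD && bumped) ? S_TURNING : S_FORWARD;
      }

      int main() {
        assert(transition(S_FORWARD, false) == S_FORWARD);  // Tests T1: no bump keeps moving
        assert(transition(S_FORWARD, true) == S_TURNING);   // Tests T2: bump triggers a turn
        return 0;
      }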
  3. Perform a peer review for your unit tests:
    1. Your groups are assigned on Canvas.
    2. BEFORE the review, create a brief checklist of items to look for in this review. The checklist should include items specifically relevant to unit tests.
      • The emphasis should be on code coverage of the code being tested.
      • You should decide how much code style to include in the review checklist for the unit test code itself. (The code being tested has already been peer reviewed -- the emphasis is on the code created to run the unit test in this review.)
      • You can manage this however you like, including having each group member propose a specific part of a checklist, merging proposals from the team members to create a unified checklist, or having each team member reviewed according to a checklist they propose (i.e., using the author's checklist). A good checklist should have perhaps 12-25 items on it covering all the elements that are to be reviewed.
      • You'll individually hand in whatever checklist was used to review your code regardless of who wrote it.
      • You are responsible for ensuring that the checklist used for your code (i.e., the one you hand in) is reasonable for the purpose.
      • You can update or substitute checklists before you are reviewed if you think the group is going to use an inadequate checklist.
    3. As in previous reviews, assign a scribe and a leader for each review. Allocate 30 minutes per review. The person whose artifacts are under review should be the scribe.
    4. By the end of all reviews in your group, everyone should have taken a turn being the leader.
    5. Remember the peer review guidelines: inspect the item, not the author; don't get defensive; find but don't fix problems; limit review sessions to two hours; keep a reasonable pace; and avoid "religious" debates on style.
    6. Fill out the usual peer review recording spreadsheet (or something similar) for each review. Fix any defects you can before handing in the checkpoint. Defer any major rework by writing "deferred" or similar wording in the review issue log spreadsheet as appropriate. To be completely clear, this review DOES NOT INVOLVE YOUR TURTLE SOURCE CODE AT ALL -- it is a review of the UNIT TEST coverage, pass/fail checks, and related unit test code you have written.
    7. Fix any issues from peer review and make sure all unit tests pass. Note that the rubric specifically checks that you fixed issues from peer review, so do not blow this off.
  4. Study the runtime monitor interface:
    1. All code is in $ECE642RTLE_DIR/monitors/.
    2. A runtime monitor is a separate ROS node that eavesdrops on all the ROS messages sent and received by the turtle. It can keep track of some sort of stateful turtle behavior by observing the messages. (Alternative Justin Ray phrasing from 18-649: "Because your monitor uses event-triggered semantics, if you need to check for a sequence of events, you may need to create some state variables to keep track of events of interest that occurred in the past")
    3. Every time a message is sent or received by the turtle, a runtime monitor receives an interrupt with the contents of this message and must handle the interrupt. Study $ECE642RTLE_DIR/monitors/monitor_interface.h to see function declarations for these interrupt handlers. Any monitor you write must implement these functions.
    4. A simple logging monitor is implemented in $ECE642RTLE_DIR/monitors/logging_monitor.cpp. It does not check for any invariants: it just takes any message it sees and prints out the information.
    5. $ECE642RTLE_DIR/monitors/step_monitor.cpp implements a monitor that checks for the following violation: given a turtle that has gone between squares A and B, an invariant violation occurs if the Manhattan distance between A and B is greater than 1. That is, the turtle shall only move between adjacent squares. Things to note: the previous Pose stored in memory, the use of ROS_WARN to display an invariant violation, and the empty stub functions that are not needed for this particular invariant but must be present for the code to compile. (A condensed sketch of this pattern appears at the end of this step.)
    6. Build the monitors using catkin_make ece642rtle_logging_monitor (or step_monitor).
    7. In your catkin workspace (proj09_ws or similar), run source devel/setup.bash and then rosrun ece642rtle ece642rtle_logging_monitor to run the monitor.
      It will display [ERROR] [1522615024.818666824]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying... while the turtle is not running, which is OK. The node will start running normally when it can connect with the other nodes.
    8. In another terminal, run ./build_run_turtle.sh as usual. Then observe the monitor output in the first terminal.
    9. Once you get a handle on this workflow, you can run ./run_642_monitors ece642rtle_step_monitor. This script takes one or more monitors as arguments, prints their output to a terminal, and produces a file called VIOLATIONS.txt that lists each invariant violation with five messages of context before and after it.
    10. If the step_monitor does not show any invariant violations, try changing the code so that an obviously wrong invariant is violated. For example, change line 31 of the code to treat any movement in the y-direction as an invariant violation. Re-build and re-run everything to see the violation in action!
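    11. To make the stateful-monitor pattern from items 2 and 5 concrete, here is a condensed sketch in the spirit of step_monitor.cpp. The interrupt signature below is illustrative, not copied from the course code; use the actual declarations in monitor_interface.h.

      #include <ros/ros.h>
      #include <cstdlib>

      // State: the last pose this monitor observed, if any.
      static int prevX, prevY;
      static bool poseSeen = false;

      void poseInterrupt(ros::Time t, int x, int y) {  // illustrative signature
        // Invariant: consecutive poses must be within Manhattan distance 1.
        if (poseSeen && (abs(x - prevX) + abs(y - prevY)) > 1) {
          ROS_WARN("VIOLATION: turtle jumped from (%d,%d) to (%d,%d)",
                   prevX, prevY, x, y);
        }
        prevX = x;
        prevY = y;
        poseSeen = true;
      }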
  5. Write your own monitor that makes sure that the turtle turns no more than 90 degrees at a time:
    1. Use the provided $ECE642RTLE_DIR/monitors/logging_monitor.cpp and $ECE642RTLE_DIR/monitors/step_monitor.cpp as examples of how to write the interrupt handlers.
    2. Name your monitor ANDREWID_turn_monitor.cpp. Make the following addition (the ece642rtle_turn_monitor line) to lines 50-53 of CMakeLists.txt:
      # Build the monitors: each target must be in quotes.
      # First item is the target name, following items are the source files
      # Implicitly all of these are built with monitors/ros_monitor_interface.cpp
      set(monitor_TARGETS_SRCS
       "ece642rtle_logging_monitor monitors/logging_monitor.cpp"
       "ece642rtle_step_monitor monitors/step_monitor.cpp"
       "ece642rtle_turn_monitor monitors/ANDREWID_turn_monitor.cpp"
      )

      If you have an additional source file (for example, factored-out utility functions), you can include it on the same line, as in ece642rtle_turn_monitor monitors/ANDREWID_turn_monitor.cpp monitors/ANDREWID_monitor_utils.cpp.
    3. A more precise way to describe this invariant: if there have been two subsequent calls to orientationInterrupt with directions A and then B, A and B shall not differ by more than a quarter turn. For example, EAST to NORTH is acceptable, but EAST to WEST is not. "Subsequent" in this case applies only to orientationInterrupt: there can be calls to other interrupts, such as poseInterrupt or bumpInterrupt, between the two calls, and they are still considered "subsequent" as long as there is no call to orientationInterrupt between them.
    4. Make sure your monitor code follows good coding practices. A small look-up table is OK (but not required, if you have another implementation in mind). One possible handler shape is sketched at the end of this step.
    5. Your code shall use the ROS_WARN function to indicate an invariant violation. You should use ROS_INFO to print out any relevant information for your monitor to display. See $ECE642RTLE_DIR/monitors/step_monitor.cpp as an example.
    6. Build your code using catkin_make ece642rtle_turn_monitor. Run it by typing source devel/setup.bash from your workspace directory and then running ./run_642_monitors ece642rtle_turn_monitor (recommended) or rosrun ece642rtle ece642rtle_turn_monitor. You can then use ./build_run_turtle.sh in a new terminal to run your turtle and observe the output of the monitor in the first terminal.
    7. Take note of any invariant violations. Are they due to bugs in your monitor implementation, or bugs in your turtle implementation? If the former, fix your monitor implementation until you are confident that it is correct. If the latter, spend at least an hour (or less, if the invariant is fixed by then) fixing the invariant violations. In rare cases, ROS timing issues might give you false positives if messages arrive out of order; if you see these, use the message timestamps in the log to argue in the write-up that the violation is a false positive. In both Project 9 and Project 10 you are graded on the correctness of your monitor, but only in Project 10 will a small fraction of the points be for running your turtle code without invariant violations.
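    8. One possible shape of the turn-monitor handler, illustrative only: the signature is assumed, not copied from monitor_interface.h, and orientations are assumed to be encoded 0-3 (e.g., NORTH=0, EAST=1, SOUTH=2, WEST=3). Check the real interface before writing yours.

      #include <ros/ros.h>
      #include <cstdlib>

      // State: the last orientation this monitor observed, if any.
      static int prevOrient;
      static bool orientSeen = false;

      void orientationInterrupt(ros::Time t, int orient) {  // illustrative signature
        if (orientSeen) {
          // A difference of 2 mod 4 is a half turn (e.g., EAST to WEST): a violation.
          // 0 is no turn; 1 or 3 is a quarter turn either way, which is allowed.
          int diff = abs(orient - prevOrient) % 4;
          if (diff == 2) {
            ROS_WARN("VIOLATION: turtle turned more than 90 degrees (%d -> %d)",
                     prevOrient, orient);
          }
        }
        prevOrient = orient;
        orientSeen = true;
      }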
  6. Answer the following questions in a writeup:
    1. Your name. (If the writeups are printed out, the file name information is lost.)
    2. Q1. Give the command(s) to build and run your project, including all required warnings turned on, and also to run your unit tests. (If you need to disable warnings to get the project to build then do that, but tell us which ones in a later question.)
    3. Q2. Were any warnings difficult or tricky to deal with? Which ones? Are there any warnings left unresolved? Are there any warnings turned off for your build process?
    4. Q3. Are there any unit tests that still don't pass? Which ones, and why?
    5. Q4. What were the most important things that peer review caught in this process? (List the handful that strike you as most relevant or useful.)
    6. Q5. Were there any problems with your peer review group?
    7. Q6. Describe any tricky/subtle points in your invariant monitor implementation.
    8. Q7. If you had invariant violations, were they due to issues in your monitor code, your turtle code, or both? How did you go about fixing them? If you believe a violation is due to ROS timing issues, make the argument here.
    9. Q8. Attach your statechart, including any updates you made for Project 9. It should match the transition tests you wrote.
    10. Q9. Any problems or things you'd change about this assignment?
    11. Q10. Attach a copy of the peer review checklist used for your peer review.
    12. Q11. Attach a copy of the peer review issue log for your peer review.

Note: Project 10 (the final project) will have you write more invariants (such as "must face the line segment being checked for bump" and "does not go through walls") and solve all the given mazes as well as some new mazes. You will also be expected to have up-to-date documentation traceability, and your unit tests must pass and fulfill the coverage criteria. So if you've been slacking on meeting the invariant requirements, now you have some warning that they are about to be enforced.

Note: Your grading TA might request an in-person meeting if there are problems building and checking your code base.


Handin checklist:

Hand in the following:

  1. Your ece642rtle directory, archived as p09_ece642rtle_[AndrewID]_[FamilyName]_[GivenName].zip (.tar.gz is also accepted). Make sure all warning flags are enabled.
  2. All files you need to unit test your state chart(s), including ANDREWID_student_turtle.cpp, ANDREWID_build_run_tests.sh, the file that sets and runs your unit tests, your mock function file, and any header files, Makefiles, and/or other files necessary to build and run your tests. Make sure all files (except possibly the Makefile) start with ANDREWID_.
  3. Your writeup, called p09_writeup_[AndrewID]_[FamilyName]_[GivenName].pdf. All the elements of the writeup should be in a single PDF file, including the peer review materials. Please don't make the TAs sift through a directory full of separate image and text files.

Zip the three items above and submit them as Project09_[Andrew ID]_[Family name]_[First name].zip.

The rubric for the project is found here.

