18-642 Project 8

Page last updated 10/13/2018 (changelog)

This project has you unit-test your turtle's statechart. You will get experience using CUnit assertions and writing mock functions for unit tests. You must achieve 100% transition coverage of the statechart and 100% branch coverage. You will also compile your code with warnings enabled, which you will have to take care of in the next project.

We strongly recommend using CUnit if you want to do embedded software in industry, because it is much more representative of the test frameworks we've seen used there. However, if you really want to use another unit testing framework such as Google Test, that is OK.


Lab Files:


Hints/Helpful Links:


Procedure:

  1. Compile your code checking for warnings:
    1. Modify line 67 of $ECE642RTLE_DIR/CMakeLists.txt from set(proj8_flags false) to set(proj8_flags true). This enables the following compiler flags, with -Werror promoting all warnings to errors: -Werror -Wextra -Wall -Wfloat-equal -Wconversion -Wparentheses -pedantic -Wunused-parameter -Wunused-variable -Wreturn-type -Wunused-function -Wredundant-decls -Wunused-value -Wswitch-default -Wuninitialized -O1 -Winit-self
    2. Study the following lines in ece642rtle/CMakeLists.txt:
      # Warning flags for Projects 8,9,10
      target_compile_options(ece642rtle_student PUBLIC
        "-Werror" # Do not comment this out after Project 8!
        "-Wextra"
        "-Wall"
        ...
      )

      These lines turn on the corresponding warning flags. If you would like to suppress a warning flag (to cut down on noise while dealing with other warnings), simply comment out that line with a "#".
    3. The "-Werror" flag turns all warnings into errors, which ensures your project will always re-compile if you previously had warnings, even if you have not changed any files. If you need to check if your code compiles and runs before you have fixed all of the warnings, you may comment the "-Werror" flag out. Just be sure that it's not commented out when you are counting your warnings and submitting code for future projects.
    4. Build your code using the command catkin_make ece642rtle_student and take note of how many warnings (displayed as errors) you see. You are not required to fix any of the warnings yet, but you will be required to fix all of them in the next project.
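
      To see what these flags catch, here is a hypothetical snippet (not from the project code) that would now fail to compile:

      // Both lines flagged below become errors once proj8_flags is true:
      int scale(double ratio) {
          int count = ratio * 10.0;   // -Wconversion: double silently narrowed to int
          if (ratio == 0.5) {         // -Wfloat-equal: exact comparison of floats
              count = 0;
          }
          return count;
      }
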
  2. Peer Review: Perform a peer review on your code. You only need to review the code itself, but you will also need the detailed design on hand so you can compare the code against it.
    1. Your groups are assigned on Canvas. We strongly recommend you conduct the peer review BEFORE unit testing, even if it results in more bugs being recorded. You are not being judged on the number of bugs reported. (Although if you find zero bugs in peer review, something probably went wrong with the review.) That said, we do not intend to wreck your schedule by forbidding unit testing before the review if your group has trouble finding a meeting time. Schedule the peer review as early as you can, and be reasonable about managing the peer review vs. unit test timing.
    2. Use the peer review checklist here: https://betterembsw.blogspot.com/2018/01/new-peer-review-checklist-for-embedded.html
      • Item #0 can be recorded as a defect during the review, and that is OK since we don't expect you to fix warnings in this project. Just record the status as "next project" if you have warnings. We do NOT want you to delay the review until the last minute because of this item.
      • Item #2 (follows style guidelines) refers to reasonable conformance to the project #3 checklist. If there is a perceived conflict between that and the peer review checklist, the peer review checklist should win.
      • Item #23 (Does the code match the detailed design) means, in this case: does the code match the statechart and any algorithm design in the detailed design?
    3. As in previous reviews, assign a scribe and a leader for each review. Allocate 30 minutes per review. The person whose artifacts are under review should be the scribe. By the end of all reviews in your group, everyone should have taken a turn being the leader.
    4. Remember the peer review guidelines: inspect the item, not the author; don't get defensive; find but don't fix problems; limit review sessions to two hours; keep a reasonable pace; and avoid "religious" debates on style.
    5. Fill out the usual peer review recording spreadsheet (or something similar) for each review. Fix any defects you can before handing in the checkpoint. Defer any major rework by writing "deferred" or similar wording in the review issue log spreadsheet as appropriate.
  3. Read over the CUnit documentation:
    1. Writing CUnit Test Cases
    2. Managing tests and suites (this is good to know at a high level, but do not get bogged down in the details -- the example code will have all that you need)
    3. Example code
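
      At a high level, the suite and registry boilerplate you will see in the example code has roughly this shape (a minimal sketch using CUnit's Basic interface; the example code is authoritative):

      #include <CUnit/Basic.h>

      void test_t1(void);  // test functions, defined elsewhere

      int main() {
          CU_initialize_registry();                    // set up the test registry
          CU_pSuite suite = CU_add_suite("turtle_suite", NULL, NULL);
          CU_add_test(suite, "test transition T1", test_t1);
          CU_basic_set_mode(CU_BRM_VERBOSE);           // print each test's result
          CU_basic_run_tests();                        // run everything registered
          CU_cleanup_registry();
          return 0;
      }
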
  4. Build, run, and study the example system and its unit tests:
    1. The example code is in $ECE642RTLE_DIR/cUnit_example.
    2. Study dummy_turtle.cpp. This code implements the following statechart:
    3. Note that dummy_turtle.cpp calls two functions (set_orientation() and will_bump()) that we expect would be implemented in some file like student_maze. However, in order to unit test dummy_turtle, we need a way to mock up these functions in our unit testing framework. We do this by replacing the header file we use while testing:
      #ifdef testing
      #include "student_mock.h"
      #endif
      #ifndef testing
      #include "student.h"
      #include "ros/ros.h"
      #endif

      If testing is #define'd (we do this in the g++ command below), the file uses "student_mock.h". If testing were not defined, the file would use "student.h" and "ros/ros.h" as usual.
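
      For reference, a mock header along these lines lets dummy_turtle.cpp compile without ROS (a sketch; check the actual student_mock.h in the example for the real contents, and note the mock_* test hooks here are hypothetical names):

      #ifndef STUDENT_MOCK_H
      #define STUDENT_MOCK_H
      void set_orientation(int ornt);  // mocked student_maze function
      bool will_bump();                // mocked student_maze function
      void mock_set_bump(bool b);      // test hook: choose what will_bump() returns
      int  mock_get_orientation();     // test hook: read back the orientation output
      #endif
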
    4. The mock functions are implemented in mock_functions.cpp. Study how mock_orientation and mock_bump are used to mock up the orientation output and will_bump input variable of the state chart.
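
      The overall shape is roughly this (illustrative; the shipped file is authoritative, and the mock_set_bump/mock_get_orientation hooks are hypothetical names):

      #include "student_mock.h"

      static int  mock_orientation = 0;  // captures the statechart's orientation output
      static bool mock_bump = false;     // canned value for the will_bump input

      void set_orientation(int ornt) { mock_orientation = ornt; }
      bool will_bump()               { return mock_bump; }

      void mock_set_bump(bool b)     { mock_bump = b; }
      int  mock_get_orientation()    { return mock_orientation; }
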
    5. Take a look at how Transitions T1 and T2 are tested in student_test.cpp (lines 11-17, 19-26). In particular, note that the at_end input variable is passed as an input to moveTurtle. Your own code might pass input variables as inputs to a function, or use methods (such as bumped()) to fetch them, and the example shows both ways of handling these cases.
    6. Note that test_t1 and test_t2 test the output (set by the starting state) and the resultant state. Your tests shall do the same.
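
      Concretely, a transition test in that style looks something like the sketch below (the state names, moveTurtle's exact signature, and the mock hooks are illustrative):

      // Tests T1: from S1 with at_end==false and will_bump==false,
      // check both the output and the resultant state.
      void test_t1(void) {
          mock_set_bump(false);                        // arrange the will_bump input
          State nextState = moveTurtle(S1, false);     // at_end passed as a parameter
          CU_ASSERT_EQUAL(mock_get_orientation(), 1);  // output set by the starting state
          CU_ASSERT_EQUAL(nextState, S2);              // resultant state
      }
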
    7. Build the example:
      g++ -Dtesting -o student_tests student_test.cpp dummy_turtle.cpp mock_functions.cpp -lcunit
      The -Dtesting flag functions the same as #define testing would in the file, and -lcunit tells the compiler to link the CUnit library.
    8. Run the example (./student_tests) and observe the output. It should match the following:
    9. Change something (such as the output orientation in S1) in dummy_turtle.cpp to make a test fail. Verify that the test fails by building and running the example again.
  5. Use CUnit to test your own student_turtle state chart implementation.
    1. Include the same #ifdef testing blocks as in dummy_turtle.cpp at the top of student_turtle.cpp. As in the example, implement mocks of any student_maze functions you call.
    2. You can also use the #ifdef testing block to remove things such as the static keyword, so that you can instrument the code for testing while keeping static properties when testing is not #define'd. The idea is that you should be able to use the same student_turtle.cpp in your regular ece642rtle build and your unit test build without modifying the file between builds.
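
      One common pattern for this (a sketch, not a requirement):

      // In the unit test build, STATIC expands to nothing so the test
      // file can reach these symbols; in the normal build they stay
      // file-local.
      #ifdef testing
      #define STATIC
      #else
      #define STATIC static
      #endif

      STATIC int currentState;  // example file-scope variable
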
    3. Make sure your student_turtle state chart implementation is easy to instrument for testing -- we recommend moving your main state chart logic to a routine that takes the current state as one of the inputs and returns the next state, like in dummy_turtle.cpp.
    4. You may need to make new getter/setter methods to mock up any data structures you have (for example, instead of accessing a visit counts array directly, you may need to create a method called visit_array_at(int,int)). In some cases the smartest approach will be to modify your code to make it easier to test rather than deal with an overly difficult mock-up. That tradeoff happens in industry projects too. If there is an aspect of your code that is particularly difficult to mock up in this way, please talk to us in office hours.
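
      For instance, a hypothetical accessor pair for a visit-count array might look like this (using the STATIC macro from the previous sketch; the names and MAZE_SIZE are placeholders for whatever your code uses):

      // Illustrative getter/setter so tests can reach the visit counts
      // without exposing the array itself:
      STATIC int visitMap[MAZE_SIZE][MAZE_SIZE];
      int  visit_array_at(int x, int y)           { return visitMap[x][y]; }
      void set_visit_array(int x, int y, int val) { visitMap[x][y] = val; }
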
    5. Set up the CUnit framework as in the example, and write unit tests for your code. You are required to provide a script that builds and runs your tests: AndrewID_build_run_tests.sh. The architecture of the testing files is up to you, but remember that the files must start with your AndrewID.
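
      For example, a minimal version of that script (assuming a flat layout like the example's; the file names are placeholders for your own) could be:

      #!/bin/bash
      # Build the unit tests with testing defined, then run them.
      g++ -Dtesting -o student_tests ANDREWID_student_test.cpp \
          ANDREWID_student_turtle.cpp ANDREWID_mock_functions.cpp -lcunit
      ./student_tests
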
    6. You should use the g++ flag -Dtesting to make testing defined. (This compiler flag defines testing just as if a #define had been placed in a source code file.) In a Makefile, you can use CPPFLAGS += -Dtesting. You should not have #define testing anywhere in your code; enable it only in the build commands, so that we can grade your project seamlessly.
    7. Your unit tests shall have the following requirements:
      1. 100% transition coverage: Test every transition in the statechart. Write tests according to your statechart diagram and requirements documentation (not your code) -- in peer review next week, your peers will verify that your tests correspond to your diagram. Writing tests based on documentation and running them on your code is a way of verifying that your implementation matches your documentation. Annotate code-to-statechart traceability by including comments in your unit test code that map to transitions (for example, "// Tests T1"). It is fine for one test to map to more than one transition, but every transition must be tested and annotated in the code.
      2. Up to 100% data coverage: If your transition tests do not cover all combinations of input variables, write up to 8 additional tests to cover these combinations; if fewer than 8 tests are needed to achieve 100% data coverage, write only as many as necessary. (For example, in the dummy example, state==s1, at_end==false, and will_bump==false is one possible combination.)
      3. 100% branch coverage: Write any additional tests needed to achieve 100% branch coverage in your statechart handling code (including subroutines). This means you have to figure out how to exercise any default: cases in your switch statements. Note that this is simple "branch coverage" and not MCDC coverage. We recommend, but do not require, 100% MCDC coverage.
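
        For example, one way to reach an otherwise-unreachable default: case is to force an out-of-range state value (a sketch; the cast and the expected behavior depend on your own types and design):

        // Exercises the default: arm of the state switch.
        void test_default_case(void) {
            State nextState = moveTurtle(static_cast<State>(99), false);
            CU_ASSERT_EQUAL(nextState, static_cast<State>(99));  // e.g., if default holds state
        }
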
    8. Build and run your tests.
    9. Spend at least one hour fixing any failing unit tests (it is OK to spend less than one hour if all tests are fixed). Take note of any tests you did not fix. You will have to pass all your unit tests by the next Project.
    10. Re-compile the main project after writing and fixing any unit tests you had time to fix. How many warnings did you get?
      1. The primary purpose of this step is to make sure you know what you have to fix by the end of the next project. You will have to fix all warnings by the next Project.
  6. Answer the following questions in a writeup:
    1. How many warnings did you get in Step 1? Which one (if any) was the most surprising?
    2. Briefly explain how to run your tests using your ANDREWID_build_run_tests.sh script so that the TAs can do it without having to reverse-engineer your submission.
    3. Make an argument that you achieved 100% transition coverage, 100% (or approaching) data coverage, and 100% branch coverage. You do not have to describe every obvious test case, but talk about how you handled any special cases and your general strategy.
    4. Did you have any failing tests? Why did they fail? For the ones you fixed, how was the experience of fixing them?
    5. List any tests that are failing at the time of turn-in. Briefly give a plan for fixing them for the next project. The TAs will compare this list against the results of their own run of your tests.
    6. How many warnings did you get at the end of Step 5? Why did this number increase/decrease/stay the same?
    7. Include your statechart, including any updates you made for this project. It should match the transition tests you wrote. Your statechart should reflect the code you submit. We strongly suggest you update other parts of your design, but you do not have to turn them in.
    8. Include the peer review spreadsheet for the peer review done on YOUR code. (Do not submit peer reviews you participated in for other student code; each student submits only one peer review sheet.)
    9. Did you do peer review on your code before/after/during unit test? Any lessons learned or thoughts on how that turned out?
    10. Do you have any feedback about this project? Also, include your name at the start of the writeup. (If writeups are printed out, the file name information is lost.)
  7. Note: The next Project (a one-week project) will have you fix all the warnings and failing unit tests you encountered in this Project, and will also have you implement an invariant. Since there is some slack built into this project that carries over to the next Project, plan your time accordingly.

Handin checklist:

Hand in the following:

  1. Your ece642rtle directory, called p08_ece642rtle_[AndrewID]_[FamilyName]_[GivenName].zip (.tar.gz is also accepted).
  2. All files you need to unit test your state chart, including ANDREWID_student_turtle.cpp, ANDREWID_build_run_tests.sh, the file that sets up and runs your unit tests, your mock function file, and any header files, Makefiles, and/or other files necessary to build and run your tests. Make sure all files (except possibly the Makefile) start with ANDREWID_.
  3. Your writeup, called p08_writeup_[AndrewID]_[FamilyName]_[GivenName].pdf. All elements of the writeup should be in a single .pdf file; please don't make the TAs sift through a directory full of separate image and text files.

Zip the three files and submit them as Project08_[AndrewID]_[FamilyName]_[GivenName].zip.
The rubric for the project is found here.


Changelog: