18-642 Project 8
Page last updated 10/19/2019 (changelog)
This project has you unit-test your turtle's statechart. You will get
experience using CUnit assertions and writing mock functions for unit tests.
You must achieve 100% coverage on the transitions of the statechart, and
achieve 100% branch coverage. You will also compile your code with warnings,
which you will have to take care of in the next project.
We strongly recommend using CUnit if you want to do embedded software
in industry, because it is much more representative of test frameworks we've
seen in industry. However, if you really want to use another unit testing
framework such as Google Test that is OK, but no support will be
provided for Google Test.
- See recitation slides
- Please follow the handin naming convention: every filename (before the
file extension) must end in [AndrewID]_[FamilyName]_[GivenName].
- In this and future projects, $ECE642RTLE_DIR refers to
~/catkin_ws/src/ece642rtle (or corresponding project workspace).
- Compile your code checking for warnings:
- Modify line 67 of $ECE642RTLE_DIR/CMakeLists.txt from
set(proj8_flags false) to set(proj8_flags true). This turns
the following compiler warnings into errors: -Werror -Wextra -Wall
-Wfloat-equal -Wconversion -Wparentheses -pedantic -Wunused-parameter
-Wunused-variable -Wreturn-type -Wunused-function -Wredundant-decls
-Wunused-value -Wswitch-default -Wuninitialized -O1 -Winit-self
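In case it helps to visualize what flipping that flag does, the warning block in CMakeLists.txt is plausibly structured like the sketch below (add_compile_options is standard CMake; the exact structure of the course file may differ):

```cmake
# Sketch only -- the course CMakeLists.txt already contains a block
# like this; you only change proj8_flags from false to true.
set(proj8_flags true)
if(proj8_flags)
  add_compile_options(
    -Werror -Wextra -Wall -Wfloat-equal -Wconversion -Wparentheses
    -pedantic -Wunused-parameter -Wunused-variable -Wreturn-type
    -Wunused-function -Wredundant-decls -Wunused-value
    -Wswitch-default -Wuninitialized -O1 -Winit-self
  )
endif()
```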
- Study the following lines in ece642rtle/CMakeLists.txt:
# Warning flags for Projects 8,9,10
"-Werror" # Do not comment this out after Project 8!
These lines turn on the corresponding warning flags. If you would like to
suppress a warning flag (to cut down on noise while dealing with other
warnings), simply comment out the line with a "#."
- The "-Werror" flag turns all warnings into errors, which ensures
your project will always re-compile if you previously had warnings, even if you
have not changed any files. If you need to check if your code compiles and runs
before you have fixed all of the warnings, you may comment the
"-Werror" flag out. Just be sure that it's not commented out when you
are counting your warnings and submitting code for future projects.
- Build your code using the command catkin_make ece642rtle_student
and take note of how many warnings (displayed as errors) you see. You are
not required to fix any of the warnings, but will be required to fix all the
warnings in the next project.
- Peer Review: Perform a peer review on your code. Only the code itself is
under review, but reviewers will also need the detailed design so they can
compare the code against it.
- Your groups are assigned on canvas. We strongly recommend you conduct the
peer review BEFORE unit test, even if it results in more bugs being recorded.
You're not being judged by number of bugs reported. (Except, if you have zero
bugs in peer review probably something went wrong with the peer review.)
However, it is not our intention to mess up your schedule by forbidding unit
test if your group has trouble scheduling a meeting time. Schedule the peer
review as early as you can, and do something reasonable managing peer review
vs. unit test timing.
- Use the peer review checklist here:
- Item #0 can be recorded as a defect during the review and that is OK since
we don't expect you to fix warnings in this project. Just put status as
"next project" if you have warnings. We do NOT want you to delay the
review to the last minute because of this item.
- Item #2 (follows style guidelines) is referring to reasonable conformance
to the project #3 checklist. If there
is a perceived conflict between that and the peer review checklist, the peer
review checklist should win.
- Item #23 (Does the code match the detailed design) means in this case does
the code match the statechart and any algorithm design in the detailed design
- As in previous reviews, assign a scribe and a leader for each review.
Allocate 30 minutes per review. The person whose artifacts are under review
should be the scribe. By the end of all reviews in your group, everyone should
have taken a turn being the leader.
- Remember the peer review guidelines: inspect the item, not the author;
don't get defensive; find but don't fix problems; limit review sessions to two
hours; keep a reasonable pace; and avoid "religious" debates.
- Fill out the usual peer review recording spreadsheet (or something
similar) for each review. Fix any defects you can before handing in the
checkpoint. Defer any major rework by writing "deferred" or similar
wording to the review issue log spreadsheet as appropriate.
- Read over the CUnit documentation:
- CUnit Test Cases
- Tests and suites (this is good to know at a high level, but do not get
bogged down in the details -- the example code will have all that you need)
- Example code
- Build, run, and study the example system and its unit tests:
- The example code is in $ECE642RTLE_DIR/cUnit_example.
- Study dummy_turtle.cpp, which implements the example statechart.
- Note that dummy_turtle.cpp calls two functions (
set_orientation() and will_bump()) that we expect would be
implemented in some file like student_maze. However, in order to unit
test dummy_turtle, we need a way to mock up these functions in our
unit testing framework. We do this by switching the header file at
compile time: if testing is #define'd (we do this in the g++
command below), the file uses "student_mock.h"; otherwise it uses
"student.h" and "ros/ros.h" as usual.
- The mock functions are implemented in mock_functions.cpp. Study
how mock_orientation and mock_bump are used to mock up the
orientation output and will_bump input variable of the state chart.
- Take a look at how Transitions T1 and T2 are tested in
student_test.cpp (lines 11-17, 19-26). In particular, note that the
at_end input variable is passed as an input to moveTurtle.
Your own code might pass input variables as inputs to a function, or use
methods (such as bumped()) to fetch them, and the example shows both
ways of handling these cases.
- Note that test_t1 and test_t2 test the output (set by
the starting state) and the resultant state. Your tests shall do the same.
- Build the example:
g++ -Dtesting -o student_tests student_test.cpp dummy_turtle.cpp -lcunit
The -Dtesting flag functions the same as #define testing
would in the file, and -lcunit tells the compiler to link the CUnit
library.
- Run the example (./student_tests) and observe the output. It
should match the following:
- Change something (such as the output orientation in S1) in
dummy_turtle.cpp to make a test fail. Verify that the test fails by
building and running the example again.
- Use CUnit to test your own student_turtle statechart:
- Include the same #ifdef testing blocks as in dummy_turtle.cpp
at the top of student_turtle.cpp. As in the example, implement
mocks of any student_maze functions you call.
- You can also use the #ifdef testing block to remove things such
as the static keyword so that you can instrument the code for testing
while keeping static properties when testing is not #define'd.
The idea is that you should be able to use the same
student_turtle.cpp in your regular ece642rtle build and your unit test
build without modifying the file between builds.
- Make sure your student_turtle state chart implementation is easy to
instrument for testing -- we recommend moving your main state chart logic to a
routine that takes the current state as one of the inputs and returns the next
state, like in dummy_turtle.cpp.
- You may need to make new getter/setter methods to mock up any data
structures you have (for example, instead of accessing a visit counts array
directly, you may need to create a method called visit_array_at(int,int)
). In some cases the smartest approach will be to modify your code to make
it easier to test rather than deal with an overly difficult mock up. That
tradeoff happens in industry projects too. If there is an aspect of your code
that is particularly difficult to mock up in this way, please talk to us.
- Set up the CUnit framework as in the example, and write unit tests for your
code. You are requested to provide a script to build and run your tests:
AndrewID_build_run_tests.sh. The architecture of the testing files is up to
you, but remember that files must start with your AndrewID.
- You should use the g++ flag -Dtesting to make testing
defined. (This compiler directive defines "testing" just as if a
#define had been placed in a source code file.) In a Makefile, you can use
CPPFLAGS += -Dtesting. You should not have #define testing
anywhere in your code and only enable it in the build commands, so that we can
grade your project seamlessly.
- Your unit tests shall meet the following requirements:
- 100% transition coverage: Test every transition in the statechart.
Write tests according to your state chart diagram and requirements
documentation (not your code)-- in peer review next week, your peers will
verify that your tests correspond to your diagram. Writing tests based on
documentation and running them on your code is a way of verifying if your
implementation matches your documentation. Annotate code to statechart
traceability by including comments in your unit test code that map to
transitions (for example, "// Tests T1"). It
is fine for one test to map to more than one transition, but every transition
must be tested and annotated in the code.
- Up to 100% data coverage: If your transition tests do not cover
all combinations of input variables, write up to 8 additional tests to cover
these combinations (if fewer than 8 tests are needed to achieve 100% data
coverage, write only as many as necessary). For example, in the dummy
example, state==s1, at_end==false, and will_bump==false is a possible
combination.
- 100% branch coverage: Write any additional tests to achieve 100%
branch coverage in your state chart handling code (including subroutines). This
means you have to figure out how to exercise any default: switch
statements. Note that this is simple "branch coverage" and not MCDC
coverage. We recommend, but do not require, 100% MCDC coverage.
- Build and run your tests.
- Spend at least one hour fixing any failing unit tests (it is OK to spend
less than one hour if all tests are fixed). Take note of any tests you did not
fix. You will have to pass all your unit tests by the next Project.
- Re-compile the main project after writing and fixing any unit tests you
had time to fix. How many warnings did you get?
- The primary purpose of this step is to make sure you know what you have to
fix by the end of the next project.
- Create a build package. Create a set of command files that does the
following. (We understand that there are better ways to do build and test, such
as Jenkins. But we're sticking with shell scripts to avoid throwing a bunch of
additional learning curves at you during the last project.)
- Create a shell file that sets up the environment, builds, and runs all unit
tests with all warning flags enabled per previous steps in this project:
- Alternate shell variants are permitted so long as you handle invoking the
correct shell processor in your scripting.
- Create a tarball file (ANDREWID_P08.tar.gz or ANDREWID_P08.tgz) that has
absolutely everything needed to build and run your project from a clean,
freshly downloaded copy of the course virtualbox image. This is your
"build file." You'll be doing this for each remaining project.
- Note that not all unit tests are expected to pass, but all must be run by
the script. Similarly, there might be warnings generated.
- Answer the following questions in a writeup:
- How many warnings did you get in Step 1? Which one (if any) was the most
- Give precise commands on how to untar/unzip your build file and run the
script. Likely this will have at least three steps that should be cut &
paste command lines.
- cd to a subdirectory
- The exact tar command to execute including flags to extract your files
- ./ANDREWID_build.sh (if it is in a subdirectory, include the sequence of
commands that can be pasted in to make your script work)
- What to look for to indicate it worked
- A screen shot indicating that when you did the procedure in the previous
question it in fact worked.
- Make an argument that you achieved 100% transition coverage, 100% (or
approaching) data coverage, and 100% branch coverage. You do not have to
describe every obvious test case, but talk about how you handled any special
cases and your general strategy.
- Did you have any failing tests before you fixed things? Why did they fail?
For the ones you fixed, how was the experience of fixing them?
- List any tests that are failing at the time of turn-in. Briefly give a
plan for fixing them for the next project. The TA will compare these against
warnings generated with a TA compiler run.
- How many warnings did you get at the end of Step 5? Why did this number
increase/decrease/stay the same?
- Include your statechart, including any updates you made for this project.
It should match the transition tests you wrote. Your statechart should reflect
the code you submit. We strongly suggest you update other parts of your design,
but you do not have to turn them in.
- Include the peer review spreadsheet for the peer review done on YOUR code.
(Do not submit peer reviews you participated in for other student code; each
student submits only one peer review sheet.)
- Did you do peer review on your code before/after/during unit test? Any
lessons learned or thoughts on how that turned out?
- Do you have any feedback about this project? Include your name at start of
writeup. (If they are printed out, the file name info is lost.)
- Note: The next Project (a one-week project) will have you fix all the
warnings and failing unit tests you encountered in this Project, and will also
have you implement an invariant. Since there is some slack built in to this
project that carries over to the next Project, plan your time accordingly.
Hand in the following:
- Your build file.
- Your writeup, called
p08_writeup_[AndrewID]_[FamilyName]_[GivenName].pdf. The elements of the
writeup should be integrated into a single .pdf file. Please don't make the TAs
sift through a directory full of separate image and text files.
Zip the two files and submit them as Project08_[Andrew
ID]_[Family name]_[First name].zip.
The rubric for the project is found here.
- There are no points for mazes this week, but there will be next week.
We strongly suggest you see how your code is doing on mazes 1 through 6
so you can plan effort for the next project.
- If you find it really difficult to write unit tests, the problem isn't
the unit test framework; it's your code. Remember MCC/SCC? This is where high
complexity comes back to bite you.
- Minimize use of static variables
- Keep reasonable complexity to make unit testing viable
- Provide interfaces designed to make testing easier
- Using globals instead of file static to make testing easier is A REALLY BAD
IDEA (see next point)
- Nonetheless, you'll find out that unit testing with file static variables
can be painful because you need to get the variables into a certain value
before running a test. This is a perennial problem with unit test in industry,
so you get a taste in this project. For example, what if you need to test
different turtle orientations in different unit tests; how do you get the
turtle in that orientation before you run the unit test? Here are some general
techniques you can use to set file static variable state for unit testing. Some
are good ideas. Some aren't. The first approaches are the least scary. The last
approaches are the ones I most often see in industry. You can do whichever you
like, but don't come crying to us if a hack blows up in your face.
- Use test code to put the unit in the correct state via a sequence of calls.
- Example: issue a bunch of move commands to the turtle, then run the unit test.
- Pro: minimally intrusive
- Con: makes unit test design more difficult; can require lots of code other
than unit to execute; unit tests can break if system design changes.
- Add public get and set functions for the file static variables
- Pro: better than making them global
- Con: exposes internal state you don't want any other code seeing; breaks
encapsulation; involves writing extra code only used for testing
- Variant: conditionally compile so the set/get functions needed only for
unit test are compiled only when compiling for unit tests. This fixes the
encapsulation issues for production code. This means surrounding these
definitions with #ifdef TESTING .... #endif. (You'll need to put #define
TESTING in your compiler command line, such as -D TESTING.)
- #include the .c file in the test file
- Hack. Often causes complications on large projects. If you don't understand
how/what/why on your own you're not allowed to do this (don't ask for help on
how to do this).
- #ifdef TESTING
#define static
#endif
(defines static away to nothing in the test build, making file static
variables and functions visible to the test code)
- Ugly, Hideous Hack. Therefore the one I see most often in industry. But
it's still ugly. If you don't understand how/what/why on your own you're not
allowed to do this (don't ask for help on how to do this).
(If you have hints of your own please e-mail them to the course staff and,
if appropriate, we'll add to this list.)
- 10/13/2018: Released. Note addition of peer review and updated writeup
since last semester version.
- 10/19/2019: Added requirement for build file; added hints on file static
variables.