Weekly Checkpoint [17 pts + 5 possible bonus pts]

Measuring process, productivity, and satisfaction

We need to touch base with you each week, whether in person or not. We use this information to stay on top of all the teams (as much as possible).

I understand that some weeks may be better or worse than others, and that measuring your work every week and expecting the same level of performance each week can be stressful. Hence, I've introduced several places where bonus marks are possible, so that if you do poorly in a given week, you have the opportunity to catch up in a future week. Likewise, if you know you have commitments coming up, you can work ahead to earn more points now and take the hit later. I hope this gives everyone more flexibility and control over their own schedules.
We measure several aspects of your work each week. To do this, we rely on the information you submit, and when possible we will also meet with you during class to gather more. The rubric below explains how you are graded on each of these aspects.

Evaluation Rubric

Total marks possible: 17 points (+ 5 possible bonus points).

Each criterion below is graded at one of four levels:
- Exceeds Expectations (Bonus)
- Meets Expectations
- Below Expectations
- No Submission or Not Present
An "N/A" under Exceeds Expectations means no bonus is available for that criterion.
Team Log (Team Mark)
Does the submitted log recap the work achieved relative to this milestone's goals and provide the required charts, tables, and test reports in a clear format?
- Exceeds Expectations (Bonus): N/A
- Meets Expectations [2 pts]: All the required information is presented clearly. The relevant dates of the working period are specified. GitHub usernames are matched to actual student names. Milestone goals are listed for reference. All the charts and tables are provided. A test report is provided. Context is explained where needed. A team reflection on the progress made and learning needs may be provided.
- Below Expectations [1 pt]: Only some of the required information is provided, or all of it is provided but in an unclear format and/or lacks the explanation that would otherwise have clearly identified the progress made.
- No Submission or Not Present [0 pts]: Most of the required information is missing, or the log was not submitted.
Individual Log (Individual Mark)
Does the submitted log recap this period's goals, provide the peer-evaluation answer for tasks worked on, and present the information in a clear format?
- Exceeds Expectations (Bonus): N/A
- Meets Expectations [2 pts]: All the required information is presented clearly. The relevant dates of the working period are specified. The individual's goals for the working week are clearly identified for reference. A screenshot of the tasks worked on (from the peer-evaluation answer) is provided. Context is explained where needed. A self-reflection on the progress made and learning needs may be provided.
- Below Expectations [1 pt]: Only some of the required information is provided, or all of it is provided but in an unclear format and/or lacks the explanation that would otherwise have clearly identified the progress made.
- No Submission or Not Present [0 pts]: Most of the required information is missing, or the log was not submitted.
Peer Evaluations: Quality (Individual Mark)
What is the quality of the completed peer evaluations?
- Exceeds Expectations (Bonus): N/A
- Meets Expectations [2 pts]: All the questions in the peer evaluation were completed. The answers paint a consistent picture of the team dynamics from this individual's perspective.
- Below Expectations [1 pt]: Most of the questions in the peer evaluation were completed and their answers are consistent. Alternatively, all the questions were completed but there was some inconsistency in the answers.
- No Submission or Not Present [0 pts]: Most or all of the questions in the peer evaluation were missing, and/or the answers paint an inconsistent picture of the team dynamics with no context to provide further understanding.
In-class Check-in (Individual Mark)
Is the student well prepared? Was the live demo successful? Did the student behave in a professional manner? (This counts as part of your individual in-class participation.)
- Exceeds Expectations (Bonus): N/A
- Meets Expectations [2 pts]: The student was punctual, well prepared, and attentive during the check-in. A live demo was readily available to show what the student did in the working period, including the tests that passed/failed. When asked, the student answered questions informatively and concisely. All interaction was professional and respectful.
- Below Expectations [1 pt]: The student was late or was sometimes not paying attention to the discussion during the team's check-in. A live demo was shown, but time was wasted setting it up. The student was only able to answer some of the questions in a meaningful way. All interaction was professional and respectful.
- No Submission or Not Present [0 pts]: The student was unable to show the work completed in the past working period, the student was not following or understanding what the team is doing or where it stands, the student was disrespectful, or the student was not present.
Repository Measures: Code/Test Contributions (Individual Mark)
Are the PRs small, with clear commit messages? How many commits were made, and were they appropriately sized? Are proper tests provided with reasonable coverage? Does the code follow the repo conventions and standards established by the team? How many features/tasks were completed during this period?
- Exceeds Expectations (Bonus) [6 pts]: The student completed twice (or more) the expected features during the past working period. All the features were properly tested and merged. The associated PRs were a reasonable size, with clear commit messages, and followed coding conventions consistently.
- Meets Expectations [3 pts]: The student completed the expected features during the past working period. All the features were properly tested and merged. The associated PRs were a reasonable size, with clear commit messages, and followed coding conventions consistently.
- Below Expectations [1 pt]: The student completed half the expected features during the past working period, or completed all the features but without proper tests or without merging them. The quality of the PRs still needs much improvement.
- No Submission or Not Present [0 pts]: Minimal or no code/test contributions were made during the past working period.
Repository Measures: Code Review Quality (Individual Mark)
Which of the following were noted in the code review? Functionality, tests, complexity and design, naming, comments, documentation.
- Exceeds Expectations (Bonus) [5 pts]: Beyond functionality and test coverage, the code review checks for design/complexity, proper naming conventions, sufficient (but not excessive) comments for context, and updates to the repo documentation when the functionality affects it.
- Meets Expectations [3 pts]: The code review checks for functional correctness and test coverage, and comments on these aspects specifically. If problems exist, the review raises questions that the coder needs to address, or directs the coder to the problem precisely. Reviews are provided in an informative and constructive manner.
- Below Expectations [1 pt]: The code reviews lack depth (commenting on functionality or test coverage, but not both), or the comments lack the direction needed to help the coder understand the problem. Alternatively, the reviewer did not complete enough code reviews in that working period.
- No Submission or Not Present [0 pts]: The code reviews are superficial (e.g., "LGTM"), it is not clear whether the reviewer read the code carefully, or the reviews contain rude or condescending comments. Alternatively, no code reviews were made.
Repository Measures: Collaboration Process (Team Mark)
Does the interaction reveal that the team is following the pre-established process? We will check the GitHub network graph and the feature-naming convention used, and verify that each PR had two reviewers and that all review comments were addressed before merging.
- Exceeds Expectations (Bonus): N/A
- Meets Expectations [3 pts]: All the aspects required in the description are clearly followed. The network graph shows everyone is collaborating effectively.
- Below Expectations [1 pt]: Merged feature branches were deleted, so we cannot easily determine the collaboration process, but the other aspects required in the description are clearly followed.
- No Submission or Not Present [0 pts]: Merged feature branches were deleted and/or we cannot easily determine the collaboration process, and the other aspects required in the description are not followed or only some of them are followed.
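Before submitting your logs, you can do a quick self-check of your commit activity for the working period with git. This is just a convenience sketch, not part of how we grade; the author name and date range below are placeholders you should replace with your own.

```shell
# Self-check sketch: run inside your team's repo clone.
# "Your Name" and the date range are placeholders -- substitute your own.

# List your commits from the past week, one line each:
git log --since="1 week ago" --author="Your Name" --oneline

# Count them:
git log --since="1 week ago" --author="Your Name" --oneline | wc -l
```

Comparing this output against your individual log before the check-in helps catch mismatches between what you report and what the repository shows.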