feedback.py - Provide feedback for student answers
Imports
These are listed in the order prescribed by PEP 8.
Standard library
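The import statements themselves are not shown here; judging from the names used below (json, re, literal_eval), a plausible reconstruction is:

import json  # decodes stored feedback and submitted answers
import re  # regex matching for blanks
from ast import literal_eval  # safe evaluation of numeric answers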
Third-party imports
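Likewise, the current object used below is web2py's thread-local accessor, so presumably:

# current provides thread-local access to db and settings in web2py.
from gluon import current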
Local imports
None.
Return (True, feedback) if feedback should be computed on the server instead of the client. This function assumes that the inputs have already been validated.
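A plausible signature for this function (the name and parameter names are assumptions, not the verbatim original):

def is_server_feedback(question_name, course):
    ...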
Get the information about this question. Per the web2py docs, an assignment in models/db.py makes current.db available.
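A minimal sketch of the lookup, assuming a questions table with name and feedback fields and the hypothetical question_name parameter from above (table and column names are assumptions):

query_results = (
    current.db(current.db.questions.name == question_name)
    .select(current.db.questions.feedback)
    .first()
)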
Check whether the query returned a result. If feedback is present, decode it; if feedback isn't present, use client-side grading.
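A sketch of that decision, continuing the assumptions above:

feedback = query_results and query_results.feedback
if feedback:
    return True, json.loads(feedback)
return False, None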
Provide feedback for a fill-in-the-blank problem. This should produce identical results to the code in evaluateAnswers in fitb.js.
Grade based on this feedback. The new format is JSON; the old is comma-separated. Some answers may parse as JSON but still be in the old format; the new format should always produce an array.
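A sketch of that parsing, assuming the submission arrives as a string named answer_json (a hypothetical name):

try:
    answer = json.loads(answer_json)
    # The new format always produces a list; any other parse result means
    # the submission was actually in the old format.
    if not isinstance(answer, list):
        answer = answer_json.split(",")
except Exception:
    # Not JSON at all: the old comma-separated format.
    answer = answer_json.split(",")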
Walk through each blank: a flag tracks the overall correctness of the entire problem, and the correctness of each blank depends on whether the first feedback item matches. Check everything but the last answer, which always matches.
correct = True
displayFeed = []
isCorrectArray = []
for blank, feedback_for_blank in zip(answer, feedback):
    is_first_item = True
    for fb in feedback_for_blank[:-1]:
        if "regex" in fb:
            if re.search(
                fb["regex"], blank, re.I if fb["regexFlags"] == "i" else 0
            ):
                isCorrectArray.append(is_first_item)
                if not is_first_item:
                    correct = False
                displayFeed.append(fb["feedback"])
                break
        else:
            assert "number" in fb
            min_, max_ = fb["number"]
            try:
                # Note that literal_eval does not discard leading/trailing
                # spaces, but considers them indentation errors. So,
                # explicitly invoke strip.
                val = literal_eval(blank.strip())
                in_range = min_ <= val <= max_
            except Exception:
                # In case something weird or invalid was parsed (dict, etc.).
                in_range = False
            if in_range:
                isCorrectArray.append(is_first_item)
                if not is_first_item:
                    correct = False
                displayFeed.append(fb["feedback"])
                break
        is_first_item = False
    else:
        # Nothing matched. Use the last feedback.
        isCorrectArray.append(False)
        correct = False
        displayFeed.append(feedback_for_blank[-1]["feedback"])
Note that this isn’t a percentage, but a ratio where 1.0 == all correct.
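A minimal sketch of that computation, assuming one feedback list per blank:

percent = isCorrectArray.count(True) / len(feedback)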
Return grading results to the client. In a test scenario, only acknowledge that the response was recorded, rather than revealing the actual feedback.
if current.settings.is_testing:
    res = dict(
        correct=True,
        displayFeed=["Response recorded."] * len(answer),
        isCorrectArray=[True] * len(answer),
        percent=1,
    )
else:
    res = dict(
        correct=correct,
        displayFeed=displayFeed,
        isCorrectArray=isCorrectArray,
        percent=percent,
    )
return "T" if correct else "F", res