feedback.py - Provide feedback for student answers¶
Imports¶
These are listed in the order prescribed by PEP 8.
Standard library¶
Third-party imports¶
Local imports¶
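None of the three import lists survive in this rendering. A plausible reconstruction, inferred from the names used later in this file (the exact set and module paths are assumptions):

    # Standard library
    import ast
    import json
    import os
    import re
    import tempfile

    # Third-party imports
    from gluon import current

    # Local imports
    from scheduled_builder import _scheduled_builder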
Return (True, feedback) if feedback should be computed on the server, instead of the client. This function assumes that the inputs have already been validated.
Get the information about this question. Per the web2py docs, an assignment in models/db.py makes current.db available.
Check that the query returned a result.
If feedback is present, decode it.
If feedback isn’t present, use client-side grading.
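A minimal sketch of how these steps might fit together, assuming the function name is_server_feedback and a questions.feedback column (the exact query and schema are assumptions):

    def is_server_feedback(div_id, course):
        # Get the information about this question.
        row = (
            current.db(
                (current.db.questions.name == div_id)
                & (current.db.questions.base_course == course)
            )
            .select(current.db.questions.feedback)
            .first()
        )
        # Check that the query returned a result.
        if not row:
            return False, None
        # If feedback is present, decode it.
        if row.feedback:
            return True, json.loads(row.feedback)
        # If feedback isn't present, use client-side grading.
        return False, None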
Provide feedback for a fill-in-the-blank problem. This should produce identical results to the code in evaluateAnswers in fitb.js.
Grade based on this feedback. The new format is JSON; the old is comma-separated.
Some answers may parse as JSON but still be in the old format; a new-format answer always parses to an array.
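A hedged illustration of distinguishing the two formats (feedback_str and the surrounding handling are assumptions):

    try:
        feedback_for_blank = json.loads(feedback_str)
    except ValueError:
        feedback_for_blank = None
    if not isinstance(feedback_for_blank, list):
        # Old format: comma-separated. A bare number parses as valid JSON
        # but isn't an array, so it still takes the old-format path.
        feedback_for_blank = feedback_str.split(",")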
The overall correctness of the entire problem.
The correctness of this problem depends on whether the first item matches.
Check everything but the last answer, which always matches.
for fb in feedback_for_blank[:-1]:
    if "regex" in fb:
        if re.search(
            fb["regex"], blank, re.I if fb["regexFlags"] == "i" else 0
        ):
            isCorrectArray.append(is_first_item)
            if not is_first_item:
                correct = False
            displayFeed.append(fb["feedback"])
            break
    else:
        assert "number" in fb
        min_, max_ = fb["number"]
        try:
Note that literal_eval does not discard leading / trailing spaces, but considers them indentation errors. So, explicitly invoke strip.
In case something weird or invalid was parsed (dict, etc.)
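A hedged completion of the try block begun above; the names val and in_range are assumptions:

            val = ast.literal_eval(blank.strip())
            in_range = min_ <= val <= max_
        except Exception:
            in_range = False
        if in_range:
            isCorrectArray.append(is_first_item)
            if not is_first_item:
                correct = False
            displayFeed.append(fb["feedback"])
            break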
Nothing matched. Use the last feedback.
Note that this isn’t a percentage, but a ratio where 1.0 == all correct.
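A hedged sketch of the fall-through and the final ratio, using Python's for/else idiom (the else clause runs only when no feedback entry matched and broke out of the loop):

    else:
        # Nothing matched; use the last feedback.
        isCorrectArray.append(False)
        correct = False
        displayFeed.append(feedback_for_blank[-1]["feedback"])

    percent = isCorrectArray.count(True) / len(isCorrectArray)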
Return grading results to the client; for a test scenario, return only a confirmation that the response was recorded.
if current.settings.is_testing:
    res = dict(
        correct=True,
        displayFeed=["Response recorded."] * len(answer),
        isCorrectArray=[True] * len(answer),
        percent=1,
    )
else:
    res = dict(
        correct=correct,
        displayFeed=displayFeed,
        isCorrectArray=isCorrectArray,
        percent=percent,
    )
return "T" if correct else "F", res
lp feedback¶
def lp_feedback(code_snippets, feedback_struct):
    db = current.db
    base_course = (
        db((db.courses.id == current.auth.user.course_id))
        .select(db.courses.base_course)
        .first()
        .base_course
    )
    sphinx_base_path = os.path.join(current.request.folder, "books", base_course)
    source_path = feedback_struct["source_path"]
Read the Sphinx config file to find paths relative to this directory.
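A sketch of this step; read_sphinx_config is a hypothetical helper, and the key names are assumptions:

    sphinx_config = read_sphinx_config(sphinx_base_path)
    if not sphinx_config:
        return {
            "errors": [
                "Unable to load Sphinx config from {}".format(sphinx_base_path)
            ]
        }
    sphinx_source_path = sphinx_config["SPHINX_SOURCE_PATH"]
    sphinx_out_path = sphinx_config["SPHINX_OUT_PATH"]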
Next, read in the student source for the program the student is working on.
Find the path to the student source file.
    try:
        abs_source_path = os.path.normpath(
            os.path.join(
                sphinx_base_path, sphinx_out_path, STUDENT_SOURCE_PATH, source_path
            )
        )
        with open(abs_source_path, encoding="utf-8") as f:
            source_str = f.read()
    except Exception as e:
        return {
            "errors": ["Cannot open source file {}: {}.".format(abs_source_path, e)]
        }
- Create a snippet-replaced version of the source, by looking for “put code here” comments and replacing them with the provided code. To do so, first split out the “put code here” comments.
Sanity check! Source with n “put code here” comments splits into n+1 items, into which the n student code snippets should be interleaved.
Interleave these with the student snippets.
Join them into a single string. Make sure newlines separate everything.
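A sketch of the split/interleave/join sequence; PUT_CODE_HERE_COMMENT stands in for the actual delimiter, which isn't shown here:

    split_source = source_str.split(PUT_CODE_HERE_COMMENT)
    # Sanity check: n comments split the source into n + 1 pieces, matching
    # the n student snippets.
    if len(split_source) != len(code_snippets) + 1:
        return {"errors": ["Wrong number of snippets submitted."]}
    # Interleave the snippets between the pieces, then join with newlines
    # so nothing runs together.
    interleaved = [split_source[0]]
    for snippet, rest in zip(code_snippets, split_source[1:]):
        interleaved += [snippet, rest]
    source_str = "\n".join(interleaved)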
Create a temporary directory, then write the source there.
    with tempfile.TemporaryDirectory() as temp_path:
        temp_source_path = os.path.join(temp_path, os.path.basename(source_path))
        with open(temp_source_path, "w", encoding="utf-8") as f:
            f.write(source_str)
        try:
            res = _scheduled_builder.delay(
                feedback_struct["builder"],
                temp_source_path,
                sphinx_base_path,
                sphinx_source_path,
                sphinx_out_path,
                source_path,
            )
            output, is_correct = res.get(timeout=60)
        except Exception as e:
            return {"errors": ["Error in build task: {}".format(e)]}
        else:
            return {
The answer.
Strip whitespace and return only the last 4K of data or so. There’s no need for more – it’s probably just a crashed or confused program spewing output, so don’t waste bandwidth or storage space on it.
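A hedged completion of this return value; the key names are assumptions:

                "answer": output.strip()[-4096:],
                "correct": is_correct,
            }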
This function takes a list of code snippets and modifies them to prepare for the platform-specific compile. For example, it adds a line number directive to the beginning of each.
The builder which will be used to build these snippets.
A list of code snippets submitted by the user.
The name of the source file into which these snippets will be inserted.
Prepend a line number directive to each snippet. I can’t get this to work in the assembler. I tried:

From Section 4.11 (Misc directives):

- .appline 1 and .ln 1 (produces the message Error: unknown pseudo-op: `.ln'. But if I use the assembler option -a, the listing file shows that this directive inserts line 1 of the source .s file into the listing file. ???)
- .loc 1 1 (trying .loc 1, 1 produces Error: rest of line ignored; first ignored character is `,')

From Section 4.12 (directives for debug information):

- .line 1. I also tried this inside a .def/.endef pair, which just produced error messages.

Perhaps saving each snippet to a file, then including them via .include would help. Ugh.
Select what to prepend based on the language.
Python doesn’t (easily) support setting line numbers.
This is an unsupported language. It would be nice to report this as an error instead of raising an exception.
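A hedged sketch that pulls these notes together into a single transform; the function name, parameter order, and directive strings are assumptions, and the dispatch is keyed off the file extension rather than the builder:

    def transform_snippets(builder, code_snippets, source_path):
        # Select what to prepend based on the language.
        ext = os.path.splitext(source_path)[1]
        if ext in (".c", ".cpp"):
            # C and C++ compilers honor the #line directive.
            fmt = '#line 1 "snippet {}"\n'
        elif ext == ".s":
            # No working assembler directive was found; see the notes above.
            fmt = ""
        elif ext == ".py":
            # Python doesn't (easily) support setting line numbers.
            fmt = ""
        else:
            # An unsupported language. It would be nice to report this as an
            # error instead of raising an exception.
            raise RuntimeError("Unsupported language for {}.".format(source_path))
        return [
            fmt.format(index + 1) + snippet
            for index, snippet in enumerate(code_snippets)
        ]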