How to create an interactive Python lab with Pytest?
To link the lab to your course, click the three dots on the lab and choose the “Attach to course item” button. Once that is done, click the three dots again and click on “Edit” to edit the lab. When you click the “Edit” button, a new page will open. On this page you need to set up the instructions for the lab. These instructions will be visible to the user when they are attempting the lab, so include all the helper material and lab setup instructions here.
You can set up the lab environment through the .cdmrc file in the repository given to you above. It is highly recommended, at this point, that you go through the .cdmrc guide and “how to use .cdmrc in playgrounds” to understand what the .cdmrc file exactly is. Once you understand how to work with .cdmrc, come back to this area.
Next comes the evaluation script that you place in the “Test command to run” section of the Evaluation tab. Here is what that script does.

When we use set -e 1, we effectively say that the script should stop on any error. We then create a .labtests folder inside of the /home/damner/code user code directory. Note that .labtests is a special folder that can be used to place your test code. This folder is not visible in the file explorer the user sees, and the files placed in this folder are not “backed up to cloud” for the user.

We then copy the test file (more on the test file below) to /home/damner/code/.labtests/pytest.py and create a /home/damner/code/.labtests/processPythonResults.js file. This is because we need to parse the results outputted by the Python testing utility to reflect them on the playground. You may as well create this file in Python instead, reading the JSON report and outputting a boolean array to the file stored in the env variable $UNIT_TEST_OUTPUT_FILE.

The data written to $UNIT_TEST_OUTPUT_FILE must be a JSON boolean array. For example, if you write [true,false,true,true] inside $UNIT_TEST_OUTPUT_FILE, it would reflect as PASS, FAIL, PASS, PASS for the 4 challenges available inside the playground UI.

Next, we run the python3 -m pytest command, specifying the output as JSON (read by processPythonResults.js) and running it in a single thread (as we want ordered results).

Finally, we run the processPythonResults.js file, which writes the correct JSON boolean array to $UNIT_TEST_OUTPUT_FILE. That file is then read by the playground UI, which marks the lab challenges as passed or failed.
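If you prefer writing the results processor in Python instead of JavaScript (as mentioned above), here is a minimal sketch. It assumes pytest was run with the pytest-json-report plugin installed in the image, e.g. python3 -m pytest --json-report --json-report-file=/home/damner/code/.labtests/report.json, and that the report lands at the path below; the report path, plugin, and report structure are assumptions you should adjust to match your own test command.

```python
# processPythonResults.py — a hedged sketch of a results processor written in
# Python. It reads a pytest JSON report (pytest-json-report format assumed)
# and writes a JSON boolean array to the file codedamn reads for results.
import json
import os

REPORT_FILE = "/home/damner/code/.labtests/report.json"  # assumed report location

with open(REPORT_FILE) as f:
    report = json.load(f)

# pytest-json-report lists every test under "tests", each with an "outcome".
# Because pytest ran in a single thread, the order matches your challenges.
results = [t["outcome"] == "passed" for t in report.get("tests", [])]

# Write something like [true, false, true, true] for the playground UI.
with open(os.environ["UNIT_TEST_OUTPUT_FILE"], "w") as f:
    json.dump(results, f)
```

The order of entries in the report follows the order in which pytest ran the tests, which is exactly why the test command runs pytest in a single thread.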
Inside the same Evaluation tab, you’ll see another section called “Custom test file”. You can use this test file to add custom code for testing the user’s work. This is the test file that the evaluation script copies into the .labtests folder (as pytest.py) via the “Test command to run” section inside the evaluation tab we were in earlier.
The point of having a file like this is to provide you with a place where you can write your evaluation script.
Note: For Python labs, I’m assuming you will be using the pre-installed pytest utility to test the user code. Hence, the test file uses the pytest testing format. Let’s go through what it needs to do; a minimal sketch follows at the end of this explanation.
First, we add the USER_CODE_DIR directory to the path so that Python can import the user’s files. The USER_CODE_DIR value is effectively /home/damner/code, which in turn is the directory where your students store their code files (and is also what is visible in the playground file explorer by default).
Next, we import the user’s script.py dynamically using importlib.import_module('script') instead of a plain top-level import. This is to prevent the test file from crashing the other tests in case the script is not there at all.
Finally, the number of def test_(...) functions inside your test file should match the number of challenges added in the creator area. If the number of test functions is less than the number of challenges added back in the UI, the “extra” UI challenges will automatically stay as “false” (failed). If you add more test functions than there are challenges in the UI, the extra results will be ignored.
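Putting this together, here is a minimal sketch of what such a custom test file could look like. It assumes, purely for illustration, a lab with three challenges where the student implements add() and subtract() functions in script.py; replace the imported module name and the assertions with whatever your own challenges require.

```python
# Custom test file sketch (pytest format). The module name "script" and the
# add()/subtract() functions are hypothetical examples, not part of codedamn.
import importlib
import os
import sys

# Make the user's code importable. USER_CODE_DIR is assumed to be available as
# an environment variable; fall back to the documented /home/damner/code path.
sys.path.append(os.environ.get("USER_CODE_DIR", "/home/damner/code"))

try:
    # Import dynamically so a missing script.py does not crash the whole
    # test session at collection time; individual tests fail instead.
    script = importlib.import_module("script")
except ModuleNotFoundError:
    script = None


def test_script_exists():
    # Challenge 1: the student created script.py
    assert script is not None


def test_add():
    # Challenge 2 (hypothetical): script.py defines add(a, b)
    assert script is not None
    assert script.add(2, 3) == 5


def test_subtract():
    # Challenge 3 (hypothetical): script.py defines subtract(a, b)
    assert script is not None
    assert script.subtract(5, 3) == 2
```

Each test_ function maps to one challenge in the creator area, in order, so the boolean array written by the results processor lines up with the challenge list shown in the playground UI.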