config.yaml Configurations

In this section, we detail all the configuration options available when setting up an exercise repository by writing a config.yaml.

Required Configuration

Although there are many other options, the following snippet contains the required configuration options needed to run Aurum LMS. Save it as config.yaml.

For full examples, see the Configuration Examples section at the end of this page.

student_file: <filename>
run:
    student_soln_exe: <filename>
    test_runner_file: runner.py

nose_report_file: nosetests.xml
run_student_file: True
run_nosetests: True

full_scores:
    test_<function_case_one_name>:
        full_score: <int>
        description: <string>

Angle brackets < > denote placeholders. For example, for student_file you can write:

student_file: student_soln.py

Required options

student_file

This is the file name Aurum LMS saves the student's code to. Examples: main.py, main.c, student_soln.py, and student_soln.c are common file names that you can use.

run

Specify the run procedure parameters.

student_soln_exe
The name of the student's executable. For compiled languages, this is the name of the executable produced by running the Makefile. See Configuration Examples. For Python, this is often the same value as student_file.
test_runner_file
The name of the script that runs nosetests. This should always be runner.py.
nose_report_file
The name of the nosetests report. This should always be ``nosetests.xml``.
run_student_file
Defaults to True. If you do not need to run the student's solution, because you import the functions they write directly in your test cases, you can set it to False.
run_nosetests
Defaults to True and should remain as True.
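To make run_student_file: False concrete, here is a hypothetical test case that imports the student's function directly instead of executing their script. The file name student_soln.py and the function add are invented for this sketch; for demonstration, the snippet writes the student file itself (in a real repository, Aurum LMS saves the student's upload there).

```python
import importlib
import pathlib
import sys
import unittest

# Stand-in for the student's upload (hypothetical content; in a real
# exercise repository this file is produced by Aurum LMS).
pathlib.Path("student_soln.py").write_text(
    "def add(a, b):\n    return a + b\n"
)
sys.path.insert(0, ".")  # make the current directory importable

student_soln = importlib.import_module("student_soln")


class TestAdd(unittest.TestCase):
    def test_add_returns_sum(self):
        # The function is imported and called directly, so the
        # student's script never has to run on its own.
        self.assertEqual(student_soln.add(2, 3), 5)


result = unittest.main(exit=False, argv=["demo"]).result
```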
full_scores

Specify the scoring metrics.

test_<function_case_N_name>

Each unittest test case (individual unittest function) must be named in this configuration file. If the test case is defined as def test_hello_world_is_seen, then the instructor should put test_hello_world_is_seen into the configuration file.

For each test, give an integer full_score to indicate its weight. If there are four test cases, each worth 1 point, a student who passes 3 of the 4 tests receives 75% on the assignment.

For each test, also give a brief one-line description of what the test checks.
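The weighting arithmetic described above can be sketched as follows. The test names and weights mirror the Python example later on this page; which tests the student passes is invented for illustration:

```python
# Weights as they would appear under full_scores in config.yaml.
full_scores = {
    "test_it_runs": 1,
    "test_image_found": 1,
    "test_image_matches": 2,
}
passed = {"test_it_runs", "test_image_found"}  # hypothetical outcome

# Earned points are the weights of the passed tests; the grade is
# the earned fraction of the total points, as a percentage.
earned = sum(w for name, w in full_scores.items() if name in passed)
total = sum(full_scores.values())
percentage = 100 * earned / total
print(percentage)  # 2 of 4 points earned -> 50.0
```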

Build Configuration

We currently support the make build system. If you need to compile C or C++ source code, provide the following in addition to the configuration above.

build:
    build_system: <name of the build system>
    build_file: <name of the file to be built>

run:
    student_soln_exe: <executable name>
    test_runner_file: runner.py

nose_report_file: nosetests.xml
run_student_file: True
run_nosetests: True

full_scores:
    test_<function_case_one_name>:
        full_score: <int>
        description: <string>
build

Specify the build procedure in Aurum for solutions that require compilation.

build_system
Only make is supported, so fill in make.
build_file
The main file to be built. Usually this is the same name you use for student_file, such as main.cpp or student_soln.cpp.
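As a hypothetical illustration, a C++ exercise whose student_file is student_soln.cpp would declare (file name is a placeholder):

build:
    build_system: make
    build_file: student_soln.cpp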
run

Specify the run procedure in accordance to the build procedure.

student_soln_exe

The name of the executable produced by running the Makefile. This is usually the output binary defined in the Makefile (e.g., a.out).

test_runner_file

Defaults to runner.py. See Required options.

Output Artifacts Configuration

By output artifacts, we mean images or other files generated while running a script.

show_outfiles: ["<file_extension1>", "<file_extension2>", ...]
show_outfiles

Specify what kinds of additional results should be displayed to students when the test result is returned to Blackboard.

We support image outputs. A typical use case is a set of PNG plots and images generated while testing student solutions. In that case, we can write show_outfiles: ["*.png"] to indicate that we want to display all files ending with .png.
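For instance, if a hypothetical exercise produces both PNG and JPEG files that should all be shown, the patterns can be listed together (the extensions here are illustrative):

show_outfiles: ["*.png", "*.jpg"]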

Configuration Examples

Example for C

student_file: student.c
build:
    build_system: make
    build_file: student.c
run:
    student_soln_exe: student
    test_runner_file: runner.py

nose_report_file: nosetests.xml
run_student_file: True
run_nosetests: True

full_scores:
    test_binary_exists:
        full_score: 1
        description: Does the student binary exist?
    test_binary_same:
        full_score: 1
        description: Are the binaries produced by the student and the instructor equal?

Example for Python

student_file: student.py
run:
    student_soln_exe: student.py
    test_runner_file: runner.py

nose_report_file: nosetests.xml
run_student_file: True

full_scores:
    test_it_runs:
        full_score: 1
        description: Does it run without crashing?
    test_image_found:
        full_score: 1
        description: Did you save out an image?
    test_image_matches:
        full_score: 2
        description: Is the image the correct image with colorbar?