ARM64 and RISC-V (extensible) Assessment System.
areas is originally a fork of João Damas' Automatic Observation and (grade) Calculation for (subroutine) Operations tool. It automates the grading of student assignments in the Microprocessors and Personal Computers course unit.
To ease the communication between the backend server and the tool, the output format was changed: the `.txt` and `.csv` output files are now combined into a single, more complete `.json` file. The structure of the `.zip` input file was simplified. Previously unsupported data types, such as `long` and `double`, are now supported. A new input parameter, `weight`, was introduced.
Using Docker:

```bash
docker pull luist188/areas
```
To develop the tool, you must set up a Docker development environment; it eases dependency installation and provides an isolated environment.
- Build the Docker development image:

  ```bash
  docker build -f Dockerfile.dev -t areas .
  ```

- Run the image with a shared folder:

  ```bash
  docker run -it -v $(pwd):/usr/app areas
  ```
Note: if you are running macOS on an M1 (or later) chip, you must add `--platform linux/x86_64` to both `docker build` and `docker run`.
- Place the input files inside any directory.
- Run the image with a shared volume pointing to the input directory:

  ```bash
  docker run -v input:destination -it luist188/areas
  ```

  (you can learn more about `docker run` usage here)
- Run the alias command `areas` (make sure you are using `/bin/bash`), or run `python main.py` in the tool's source.
```bash
$ areas [-h] -sr SR -t T -sm SM [SM ...] [-gfd GFD] [-ffd FFD] [-grf GRF] [-tout TOUT] [-fpre FPRE]
$ areas [args]
```
Options:

```
--help, -h               Show help                                     [boolean]
-sr <subroutines.yaml>   .yaml file containing the subroutine declarations
                                                             [required] [string]
-t <tests.yaml>          .yaml file containing the test cases
                                                             [required] [string]
-sm <submission.zip...>  .zip file(s) containing the user submissions
                                                       [required] [string array]
-gfd <directory>         path to the directory used to store temporary files
                         (e.g., compiled binaries)   [default: grading] [string]
-ffd <directory>         path to the directory used to store the grading for
                         each submission            [default: feedback] [string]
-tout <timeout>          float timeout value              [default: 2.0] [float]
-fpre <precision>        floating-point threshold used when comparing
                         floating-point values in test cases
                                                         [default: 1e-6] [float]
```
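As a quick illustration, the grader can also be driven from a script. The sketch below shells out to the `areas` command with the documented flags and then reads the simplified `result.json`; the file names are placeholders, and we assume `result.json` lands in the working directory (see the output section below).

```python
import json
import subprocess
from pathlib import Path

# Placeholder file names; any subroutine/test/submission files work.
subprocess.run(
    [
        "areas",
        "-sr", "subroutines.yaml",  # subroutine declarations
        "-t", "tests.yaml",         # test cases
        "-sm", "submission.zip",    # one or more submission archives
        "-tout", "5.0",             # raise the per-test timeout
        "-fpre", "1e-4",            # loosen the float comparison threshold
    ],
    check=True,
)

# Assumption: the simplified results are written to ./result.json.
for entry in json.loads(Path("result.json").read_text()):
    print(entry["submission_name"], entry["subroutines"])
```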
The supported data types are the following:

- `int`
- `long`
- `float`
- `double`
- `char`
- `chari` (a char represented as an unsigned integer; similar to `char`, but it has to be used when the printed characters are not ASCII characters)
- `char*`/string
- `array int`
- `array long`
- `array float`
- `array double`
- `array char`
- `array chari`
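For intuition on `char` versus `chari`, the snippet below (plain Python, not part of the tool) prints the unsigned integer code points that a `chari` comparison would operate on; `'é'` is an assumed example character.

```python
# "A" is plain ASCII, so `char` suffices.
print(ord("A"))  # 65

# "é" is outside the ASCII range; declared as `chari`, the test would
# compare the unsigned integer 233 rather than a printed character.
print(ord("é"))  # 233
```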
The input file for the subroutine declarations must follow the structure and syntax described below:
```yaml
foo:
  params:
    - int
    - array char
    - array int
    - array int
  return:
    - int
    - array int

bar:
  params:
    - long
  return:
    - long
```
Each subroutine has an optional parameter that defines the subroutine architecture; the syntax is as follows:
```yaml
foo:
  architecture: arm
  params:
    - int
    - array char
    - array int
    - array int
  return:
    - int
    - array int
```
By default, if the architecture parameter is omitted, the system assumes ARM64 as the subroutine architecture. The available architectures are the following:

- `arm` - ARM64 architecture
- `riscv` - RISC-V architecture
The subroutine name has to match the name of the `.s` file to test and is case-insensitive. Thus, the subroutine `foo` or `bar` will check any `.s` file that matches its name, regardless of case. All subroutines must contain an array of parameters, `params`, and an array of return values, `return`.
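The matching rule can be pictured with a small sketch (illustrative Python, not the tool's actual implementation):

```python
from pathlib import Path

def matches(subroutine: str, path: Path) -> bool:
    # A subroutine named "foo" accepts foo.s, FOO.s, Foo.s, and so on.
    return path.suffix.lower() == ".s" and path.stem.lower() == subroutine.lower()

print(matches("foo", Path("FOO.s")))  # True
print(matches("bar", Path("baz.s")))  # False
```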
The input file for the test case declarations must follow the structure and syntax described below:
```yaml
bar:
  - inputs:
      - 6
    outputs:
      - 36
    weight: 0.5
  - inputs:
      - 5
    outputs:
      - 25
    weight: 0.5
```
The root declaration of a test case must match a name declared in the `subroutines.yaml` file. Each test case has an array of inputs, an array of expected outputs, and a test weight. The test weights of a subroutine must sum to 1.0.
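Because mismatched weights are an easy mistake to make, a small validator can check a test file before grading. This is an illustrative helper (the file name is a placeholder), not part of the tool:

```python
import math

import yaml  # PyYAML

with open("tests.yaml") as f:
    tests = yaml.safe_load(f)

# Each subroutine's test weights must sum to 1.0.
for name, cases in tests.items():
    total = sum(case["weight"] for case in cases)
    if not math.isclose(total, 1.0):
        raise ValueError(f"{name}: weights sum to {total}, expected 1.0")
```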
The submission `.zip` file must contain the `.s` files in its root. For example, for the subroutines `foo` and `bar`, the `zip` structure should be as follows:
```
submission.zip
├── foo.s
└── bar.s
```
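Such an archive can be produced, for example, with Python's `zipfile` module; the sketch below assumes `foo.s` and `bar.s` exist in the current directory and keeps them at the root of the archive (no subdirectories):

```python
import zipfile

# arcname keeps each .s file at the root of the zip, as required.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("foo.s", arcname="foo.s")
    zf.write("bar.s", arcname="bar.s")
```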
For each submission file, a `.json` file is created in the feedback directory with the same name as the `.zip` file. This file contains all information about the compilation status and the test cases. In addition, a simplified version of the results of all submissions is created in a `result.json` file. The content of the files looks as follows:
File `submission.json`:
```json
[
  {
    "name": "foo",
    "compiled": true,
    "ok": true,
    "passed_count": 2,
    "test_count": 2,
    "score": 1,
    "tests": [
      {
        "weight": 1,
        "run": true,
        "input": [
          6,
          ["-", "+", "+", "-", "-", "+"],
          [1, 2, 3, 0, 1, -25],
          [13, 2, 8, 4, 5, 25]
        ],
        "output": [
          "0",
          ["12", "4", "11", "4", "4", "0"]
        ],
        "passed": true
      }
    ]
  },
  {
    "name": "bar",
    "compiled": true,
    "ok": true,
    "passed_count": 2,
    "test_count": 2,
    "score": 1,
    "tests": [
      {
        "weight": 0.5,
        "run": true,
        "input": [
          6
        ],
        "output": [
          "36"
        ],
        "passed": true
      },
      {
        "weight": 0.5,
        "run": true,
        "input": [
          5
        ],
        "output": [
          "25"
        ],
        "passed": true
      }
    ]
  }
]
```
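A backend (or a quick script) can consume a feedback file directly. The sketch below reads one feedback file using the schema shown above; the path assumes the default feedback directory:

```python
import json

with open("feedback/submission.json") as f:
    subroutines = json.load(f)

for sub in subroutines:
    status = "compiled" if sub["compiled"] else "compile error"
    print(f"{sub['name']}: {sub['passed_count']}/{sub['test_count']} tests "
          f"passed, score {sub['score']} ({status})")
```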
File `result.json`:
```json
[
  {
    "submission_name": "submission",
    "subroutines": [
      {
        "name": "foo",
        "score": 0
      },
      {
        "name": "bar",
        "score": 0.5
      }
    ]
  },
  {
    "submission_name": "submission2",
    "subroutines": [
      {
        "name": "foo",
        "score": 1
      },
      {
        "name": "bar",
        "score": 1
      }
    ]
  }
]
```
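Likewise, `result.json` is convenient for class-wide statistics. Here is a sketch that averages each subroutine's score across all submissions (assuming `result.json` is in the working directory):

```python
import json
from collections import defaultdict

with open("result.json") as f:
    results = json.load(f)

scores = defaultdict(list)
for submission in results:
    for sub in submission["subroutines"]:
        scores[sub["name"]].append(sub["score"])

for name, values in sorted(scores.items()):
    avg = sum(values) / len(values)
    print(f"{name}: average score {avg:.2f} over {len(values)} submissions")
```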