onnx/turnkeyml
The AI insights toolchain
Stargazers: 47 | Watchers: 12 | Issues: 84 | Forks: 10
onnx/turnkeyml Issues
- Handle inconsistent opset placement in ONNX models (Updated 2 months ago)
- Unable to differentiate OS kills from timeouts (Updated 2 months ago)
- WatchdogTimer doesn't kill child processes on timeout (Closed 2 months ago)
- Benchmark Status Shows Error when a previous Build Stage failed. (Closed 2 months ago, 4 comments)
- `stage_status = killed` has no `error_log` message (Closed 3 months ago, 1 comment)
- Enhance conda creation methodology for onnxrt (Updated 3 months ago)
- Proposal: check model path in `--model_path` scripts (Updated 3 months ago)
- Input size is only saved to stats in ONNX tool flows (Updated 3 months ago)
- turnkey hangs on querying OEM system information (Closed 3 months ago)
- Validate `benchmark_files()` arguments (Updated 2 months ago)
- Integrate mass-benchmarking into the Files API (Updated 4 months ago, 3 comments)
- Change `--rebuild` to `--skip` (Updated 4 months ago, 5 comments)
- Configurable verbosity in Analyzer Status (Closed 3 months ago, 1 comment)
- Create an ORT base class and refactor run/onnxrt on to it (Updated 4 months ago)
- Use full-sized GPT-J (Closed 4 months ago)
- Add error messages to report.csv (Closed 4 months ago)
- Provide an option to use a fully-standardized conda environment (Closed 4 months ago, 2 comments)
- Turnkey stats evaluation ID collisions (Closed 4 months ago, 1 comment)
- Fix CI and enable TKML to work with torch >= 2.2.0 (Closed 4 months ago)
- Stage durations are not being correctly recorded (Closed 4 months ago)
- Issue with converting to ONNX format for LLM models (Closed 4 months ago, 3 comments)
- Proposal: A way to skip LLMs (Updated 4 months ago)
- Proposal: Store tkml CLI command and timestamp as part of stats and report (Closed 4 months ago)
- Add support for pipelines of models (Updated 4 months ago, 1 comment)
- Reporting places results for the same model on multiple CSV lines (Closed 4 months ago, 1 comment)
- torch-compiled runtime does not work for any of our models (Closed 4 months ago, 2 comments)
- Failed benchmarks that do not require a build report build status incorrectly (Closed 4 months ago)
- torch-compiled runtime implementation results in confusing status stats (Closed 3 months ago)
- Multi-cache reporting has a quadratic number of rows in the CSV (Closed 5 months ago)
- Benchmarked device name is only recorded at the end of benchmarking (Closed 3 months ago)
- Turnkey Stats records eval-specific stats at the model-level (Updated 5 months ago, 2 comments)
- Print a shorter CLI message on argparse errors (Closed 5 months ago)
- Reorganize the `turnkey benchmark` help page (Closed 6 months ago)
- Confusing error message on incorrect command (Closed 5 months ago, 2 comments)
- Support for PyTorch Lightning (Updated 6 months ago)
- Bug? / Wrong error message? "turnkey build bert.py" (Closed 6 months ago, 3 comments)
- Make sure that every public API has a nice docstring (Updated 6 months ago, 1 comment)
- Create unit tests for each public API (Updated 6 months ago)
- Fix flakey test: `timeout.py` (Updated 6 months ago)
- Release notes markdown file (Closed 6 months ago)
- Public API contract (Closed 6 months ago, 3 comments)
- Features Request: Signing Models with sigstore? (Updated 3 months ago, 2 comments)
- Proposal: Release process for Version 1.0 (Closed 6 months ago, 1 comment)
- Eliminate the `benchmark_model()` API (Closed 6 months ago, 1 comment)
- Proposal: Rename the `benchmark` command to `evaluate` (Closed 6 months ago, 2 comments)
- ONNX model size on disk is not captured for models >2GB in size (Updated 6 months ago)
- Help users who forgot to install the models requirements (Updated 6 months ago)
- `benchmarking_status` stat is ambiguous (Closed 6 months ago)
- Update issue references in the code (Closed 6 months ago)
- Add plugin API tutorials (Updated 6 months ago)