[FEATURE]: Integrate benchmarking capabilities
evan-palmer opened this issue
Is your feature request related to a problem? Please describe
There is currently minimal support for evaluating system performance, which makes it difficult to measure the acknowledgement success rate, method execution times, and similar metrics.
Describe the solution you'd like
Implement a collection of benchmarks to enable users to evaluate the performance of the system and their code. This will further support new developers as they implement their own algorithms and extend pymavswarm.
Describe alternatives you've considered
Alternatives such as pyperformance exist; however, it would be helpful to have statistics specific to the pymavswarm system.
Implementation Ideas
Implement some/all of the following benchmarks:
- Acknowledgement success rate
- State verification success rate
- Average ping
- Method execution time
- Memory usage
- Average number of retries attempted
It may also be helpful to add support for generating visualizations from the collected data, along with a decorator that lets users specify which methods should be benchmarked.
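As a rough illustration of the decorator idea, the following is a minimal sketch of what a method-execution-time benchmark could look like. The `benchmark` decorator, `_timings` registry, `summarize` helper, and `arm_swarm` method are all hypothetical names invented here, not part of the pymavswarm API:

```python
import functools
import statistics
import time

# Hypothetical registry of timing samples, keyed by method name.
_timings: dict[str, list[float]] = {}


def benchmark(func):
    """Record the wall-clock execution time of each call to ``func``."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _timings.setdefault(func.__name__, []).append(
                time.perf_counter() - start
            )

    return wrapper


def summarize() -> dict[str, dict[str, float]]:
    """Return call count, mean, and max execution time (seconds) per method."""
    return {
        name: {
            "calls": len(samples),
            "mean": statistics.mean(samples),
            "max": max(samples),
        }
        for name, samples in _timings.items()
    }


@benchmark
def arm_swarm():
    """Placeholder for a pymavswarm method that sends MAVLink commands."""
    time.sleep(0.01)


arm_swarm()
arm_swarm()
print(summarize()["arm_swarm"])
```

A similar decorator could also count retries or record acknowledgement outcomes, and the contents of the registry could feed the proposed visualizations.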
Additional context
N/A