
Call for Papers: International Benchmark and Standard Conference (Bench'21)!


Conference Overview


The International Benchmark and Standard Conference (Bench) is the only international conference dedicated to benchmarking and standards. Its steering committee is composed of Jack Dongarra, member of the US National Academy of Engineering, together with several internationally renowned professors. The conference will be held on November 14-16.

  • Hybrid publication model with BenchCouncil Transactions (TBench): accepted papers appear in the journal immediately upon acceptance. Authors may instead choose to publish in the conference proceedings (typically published by Springer, EI-indexed).
  • Three submission opportunities per year (spring, summer, and winter); the deadline for the next round is May 21. Double-blind review, with the most rigorous review rules to ensure fairness and validity.
  • Presents the internationally authoritative annual awards in the benchmarking field: the BenchCouncil Achievement Award (USD 3,000; the 2020 recipient was IEEE Fellow Prof. David Lilja of the University of Minnesota, and the 2019 recipient was Tony Hey, Fellow of the UK Royal Academy of Engineering and former Vice President of Microsoft) and the BenchCouncil Rising Star Award (USD 1,000; newly established in 2020; the 2020 recipient was Prof. Torsten Hoefler of ETH Zurich, an internationally renowned supercomputing expert).
  • This year adds an international Best Ph.D. Dissertation Award, judged by a committee of internationally renowned professors in the field.
  • Presents the Bench'21 Best Paper Award (USD 1,000) and the Reproducible Research Awards (USD 100 each, up to 12 awards).
  • Encourages multidisciplinary exchange.
  • The program consists of invited talks and paper presentations.

Important Dates

There are three opportunities to submit papers each year.

Spring submission website: https://bench2021.hotcrp.com/
Abstract registration: May 15, 2021
Paper submission: May 21, 2021
First-round author notification: June 21, 2021
Rebuttal and Revision Period: June 21-July 21, 2021
Second-round author notification: August 10

Summer submission website: TBD
Abstract registration: August 1, 2021
Paper submission: August 7, 2021
First-round author notification: September 7, 2021
Rebuttal and Revision Period: September 7-October 7, 2021
Second-round author notification: November 7, 2021

Winter submission website: TBD
Abstract registration: December 15, 2021
Paper submission: December 21, 2021
First-round author notification: January 21, 2022
Rebuttal and Revision Period: January 21-February 21, 2022
Second-round author notification: March 21, 2022

Awards

BenchCouncil Achievement Award: USD 3,000

--- This award recognizes a senior researcher for long-term contributions to benchmarks and standards, evaluation, and optimization. Recipients are eligible to become BenchCouncil Fellows.

BenchCouncil Rising Star Award: USD 1,000

--- This award recognizes a researcher with the potential to make outstanding contributions to benchmarks and standards, evaluation, and optimization.

BenchCouncil Ph.D. Dissertation Award of the Year: USD 1,000

--- This award goes to a researcher who received a Ph.D. within the past two years, for a dissertation with potential contributions to benchmarks and standards, evaluation, and optimization. The committee evaluates only the dissertation itself and the tools developed for it.

BenchCouncil Best Paper Award: USD 1,000

--- This award recognizes the best paper of the Bench conference, judged by the paper's potential impact on benchmarking, evaluation, and optimization.

BenchCouncil Reproducible Research Award: USD 100 per paper, up to 12 papers

--- Recognizes excellent, reproducible research results produced using benchmarks released by BenchCouncil or other organizations.

Call for Papers

Topics of interest include, but are not limited to, the following.

** Benchmark and standard specifications, implementations, and validations of:
  • Big Data
  • Artificial intelligence (AI)
  • High-performance computing (HPC)
  • Machine learning
  • Big scientific data
  • Datacenters
  • Cloud
  • Warehouse-scale computing
  • Mobile robotics
  • Edge and fog computing
  • Internet of Things (IoT)
  • Blockchain
  • Data management and storage
  • Financial domains
  • Education domains
  • Medical domains
  • Other application domains

** Data:
  • Detailed descriptions of research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements.
  • Analyses or meta-analyses of existing data, and original articles on systems, technologies, and techniques that advance data sharing and reuse to support reproducible research.
  • Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the data's descriptions.
  • Tools for generating large-scale data while preserving their original characteristics.

** Workload characterization, quantitative measurement, design, and evaluation studies of:
  • Computer and communication networks, protocols, and algorithms
  • Wireless, mobile, ad-hoc and sensor networks, IoT applications
  • Computer architectures, hardware accelerators, multi-core processors, memory systems, and storage networks
  • HPC
  • Operating systems, file systems, and databases
  • Virtualization, data centers, distributed and cloud computing, fog and edge computing
  • Mobile and personal computing systems
  • Energy-efficient computing systems
  • Real-time and fault-tolerant systems
  • Security and privacy of computing and networked systems
  • Software systems and services, and enterprise applications
  • Social networks, multimedia systems, web services
  • Cyber-physical systems, including the smart grid

** Methodologies, abstractions, metrics, algorithms, and tools for:
  • Analytical modeling techniques and model validation
  • Workload characterization and benchmarking
  • Performance, scalability, power, and reliability analysis
  • Sustainability analysis and power management
  • System measurement, performance monitoring and forecasting
  • Anomaly detection, problem diagnosis, and troubleshooting
  • Capacity planning, resource allocation, run time management and scheduling
  • Experimental design, statistical analysis and simulation

** Measurement and evaluation:
  • Evaluation methodologies and metrics
  • Testbed methodologies and systems
  • Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems
  • Collection and analysis of measurement data that yield new insights
  • Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks)
  • Methods and tools to monitor and visualize measurement and evaluation data
  • Systems and algorithms that build on measurement-based findings
  • Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing)
  • Reappraisal of previous empirical measurements and measurement-based conclusions
  • Descriptions of challenges and future directions that the measurement and evaluation community should pursue

** Optimization methodologies and tools.

Paper Submission

Full papers: up to 12 pages in TBench two-column format
Short papers: up to 8 pages in TBench two-column format

There are three submission opportunities each year, with double-blind review (via the HotCRP submission system). Accepted papers may be published directly in BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench), or in international proceedings (EI-indexed). From among all accepted and eligible papers, the committee will select the winners of the BenchCouncil Best Paper Award and the BenchCouncil Reproducible Research Awards.

Papers must be submitted in PDF format. The page limits are: a full paper may not exceed 12 pages, and a short paper may not exceed 8 pages, both in TBench two-column format. For both full and short papers, the page limit excludes references and author biographies. Reviewing emphasizes the value of the research rather than the length of the paper.

Once a paper is accepted, at least one author must register for the conference and present the paper on site; papers without a registered author will not be published.

**Please ensure that the submitted version of your paper satisfies all of the following requirements:**

  • The paper must be in printable PDF format.
  • Pages must be numbered.
  • The paper must print legibly in black and white; ensure that all figures remain readable after black-and-white printing.
  • The paper must describe work that has not been published elsewhere and is not under review at any other conference or journal.
  • References must include all authors (i.e., do not use "et al.").

Review Process

The review process is a hybrid of the traditional conference and journal models. A first-round submission has three possible outcomes:

  • Accept with Shepherding: a PC member will shepherd every accepted paper to ensure that the reviewers' essential suggestions are incorporated into the article's final version. This is similar to the “Minor Revision” outcome at a journal.
  • One-shot Revision: This is similar to the “Major Revision” outcome in a journal. In such cases, the authors will receive a list of issues that must be addressed before the paper can be accepted. Authors may then submit a revision of the paper during the rebuttal period. The revision should include an author's response to the reviewers' issues as part of the article's appendix. If this paper's revision is not submitted within this time, then any resubmission will be treated as a new paper. The outcome after resubmission of a “one-shot revision” will either be “Accept with Shepherding” or “Reject.” The one-shot revision may be rejected, for example, if the reviewers find that the issues they raised were not satisfactorily addressed in the revision.
  • Reject: If the paper is rejected, it may not be resubmitted to any Bench deadline within 12 months following the paper's initial submission.

Organization

Bench Steering Committees

  • Prof. Dr. Jack Dongarra, University of Tennessee
  • Prof. Dr. Geoffrey Fox, Indiana University
  • Prof. Dr. D. K. Panda, The Ohio State University
  • Prof. Dr. Felix Wolf, TU Darmstadt
  • Prof. Dr. Xiaoyi Lu, University of California, Merced
  • Dr. Wanling Gao, ICT, Chinese Academy of Sciences & UCAS
  • Prof. Dr. Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil
