
Table Understanding with Tree-based Attention (TUTA)

Please stay tuned while we complete the internal process of publishing TUTA's model and code. Feel free to contact us for more technical details and discussion: zhiruow@andrew.cmu.edu, hadong@microsoft.com

🍻 Updates

  • 2022-01-09: Cell type classification.

  • 2021-10-29: Code of TUTA.

  • 2021-09-02: We released HiTab, a large dataset on question answering and data-to-text over complex hierarchical tables.

  • 2021-08-17: We presented our work at KDD'21.

  • 2020-10-21: We released our paper on arXiv.

Models

We provide three variants of pre-trained TUTA models: TUTA (-implicit), TUTA-explicit, and TUTA-base. These pre-trained TUTA variants can be downloaded from:
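
If you want to sanity-check a downloaded checkpoint before training, a minimal sketch is shown below. It assumes the released .bin file is a plain PyTorch state dict (which is how --pretrained_model_path appears to be consumed by the training scripts); the file name is just a placeholder.

# minimal sketch: inspect a downloaded TUTA checkpoint
# assumption: the .bin file is a plain PyTorch state dict; "tuta.bin" is a placeholder path
import torch

state_dict = torch.load("tuta.bin", map_location="cpu")
print(f"{len(state_dict)} tensors in the checkpoint")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))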

Training

To run the pre-training tasks, simply run:

python train.py                                           \
--dataset_paths="../dataset.pt"                              \
--pretrained_model_path="${tuta_model_dir}/tuta.bin"      \
--output_model_path="${tuta_model_dir}/trained-tuta.bin"

# to enable a quick test, one can run
python train.py  --batch_size 1  --chunk_size 10  --buffer_size 10  --report_steps 1  --total_steps 20

# to enable multi-gpu distributed training, additionally specify 
--world_size 4  --gpu_ranks 0 1 2 3

Make sure that the number of input dataset_paths is no less than the world_size (i.e., the number of gpu_ranks).
More adjustable arguments can be found in the main procedure of train.py.
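
The sketch below illustrates that requirement. The round-robin assignment of dataset files to ranks is an assumption for illustration only, not a description of the repo's actual loader.

# illustrative sketch only (not the repo's loader): with distributed training,
# each GPU rank is assumed to draw from its own subset of the dataset files,
# so there must be at least as many dataset_paths as gpu_ranks.
dataset_paths = ["../part0.pt", "../part1.pt", "../part2.pt", "../part3.pt"]  # placeholder paths
gpu_ranks = [0, 1, 2, 3]
world_size = len(gpu_ranks)

assert len(dataset_paths) >= world_size, "need at least one dataset file per rank"
for rank in gpu_ranks:
    # hypothetical round-robin split: rank r reads files r, r + world_size, ...
    print(rank, dataset_paths[rank::world_size])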

Downstream tasks

Cell Type Classification (CTC)

To perform the task of cell type classification at downstream:

  • for data processing, use SheetReader in reader.py and CtcTokenizer in tokenizer.py;
  • for fine-tuning, use CtcHead and TUTA(base)forCTC in the ./model/ directory (a rough sketch of this flow follows below).
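
A rough outline of how these pieces are expected to fit together is sketched below. The constructors, method names, and arguments are hypothetical placeholders; the real interfaces are defined in reader.py, tokenizer.py, and the ./model/ directory.

# hypothetical sketch of the CTC flow; every signature below is a placeholder,
# not the actual API -- see reader.py, tokenizer.py, and ./model/ for the real interfaces
from reader import SheetReader        # data processing for spreadsheet tables
from tokenizer import CtcTokenizer    # tokenizer for cell type classification

reader = SheetReader()                          # hypothetical constructor
tables = reader.read("sample_sheet.xlsx")       # hypothetical method and argument

tokenizer = CtcTokenizer()                      # hypothetical constructor
inputs = tokenizer.tokenize(tables)             # hypothetical method

# fine-tuning then feeds these inputs to TUTA(base)forCTC with a CtcHead on top,
# which is what ctc_finetune.py does end to end (see the command below).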

Table Type Classification (TTC)

To perform the task of table type classification at downstream:

  • for data processing, use SheetReader in reader.py and TtcTokenizer in tokenizer.py;
  • for fine-tuning, use TtcHead and TUTA(base)forTTC in the ./model/ directory; the flow mirrors the CTC sketch above, with the Ttc counterparts swapped in.

For an end-to-end trial of CTC fine-tuning, run:

python ctc_finetune.py                                           \
--folds_path="${dataset_dir}/folds_deex5.json"                    \
--data_file="${dataset_dir}/deex.json"                            \
--pretrained_model_path="${tuta_model_dir}/tuta.bin"             \
--output_model_path="${tuta_model_dir}/tuta-ctc.bin"              \
--target="tuta"                                                   \
--device_id=0                                                   \
--batch_size=2                                                   \
--max_seq_len=512                                                 \
--max_cell_num=256                                                 \
--epochs_num=40                                                   \
--attention_distance=2                                             

A preprocessed dataset of DeEx can be downloaded from:

Data Pre-processing

For a sample raw table file input, run the corresponding command below:

# for SpreadSheet
python prepare.py                          \
--input_dir ../data/pretrain/spreadsheet   \
--source_type sheet                        \
--output_path ../dataset.pt

# for WikiTable
python prepare.py                                      \
--input_path ../data/pretrain/wiki-table-samples.json  \
--source_type wiki                                     \
--output_path ../dataset.pt

# for WDCTable
python prepare.py                         \
--input_dir ../data/pretrain/wdc          \
--source_type wdc                         \
--output_path ../dataset.pt

Each command generates a semi-processed version of the pre-training inputs.

Pass this data file to the pre-training script (via --dataset_paths); the data loader will then dynamically process it for the three pre-training objectives, namely Masked Language Model (MLM), Cell-Level Cloze (CLC), and Table Context Retrieval (TCR).
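
As a conceptual illustration of what "dynamically process" means here (this is not the repo's implementation), the loader can sample objective-specific masks on the fly each time a table is read, e.g. token-level masking for MLM:

# conceptual illustration only, not the repo's code: objectives are sampled on the fly
# from the semi-processed tables each time they are loaded, e.g. MLM-style masking.
import random

def sample_mlm_mask(token_ids, mask_id, mask_prob=0.15):
    """Randomly pick token positions to mask and keep their originals as targets."""
    masked, labels = list(token_ids), [-1] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok       # original token becomes the prediction target
            masked[i] = mask_id   # input position is replaced by the mask id
    return masked, labels

# CLC analogously masks whole cells rather than single tokens, and TCR pairs a table
# with candidate context snippets; both can be drawn dynamically in the same way.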

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

About

License: MIT License


Languages

Language: Python 100.0%