iofu728 / Task

🤧Some tasks for competition by gunjianpan

| Name | Classify | Type | Method | Modify |
| --- | --- | --- | --- | --- |
| NLP senior Task3: Chinese traditional Sequence Annotation | Sequence annotation | Classify | BiLSTM-CRF, Bert-CRF | 190626 |
| DataMining Project | Semantic Representation | Classify | LSTM-ATT | 190620 |
| NLP cbb Task2: Medicine Corpus processing with Corpus | Sequence annotation | Classify | BiLSTM-CRF, CRF | 190601 |
| NLP senior Task2: SemEval2013-Task13-WordSenseInduction | WordSenseInduction | Cluster, LM | BiLM + Clustering/WordNet | 190528 |
| Semantic Course Task3: NLPCC 2019 Task2 | Semantic Parsing | Generation | Seq2Seq | 190520 |
| DeeCamp 2019 Exam A | Basic Knowledge Test | - | - | 190427 |
| NLP cbb Task1: Medicine Corpus processing | Linguistics | Generate | Jieba + Dict + Rules | 190418 |
| Semantic Course Task2: SemEval2017-Task4-SentimentAnalysis | Sentiment | Classify | TextCNN & Bert | 190414 |
| NLP senior Task1: SemEval2018-Task7-RelationClassification | Semantic Relation | Multi Classify | TextCNN & LR & LinearSV | 190411 |
| Concrete | Feature | Classify | lightGBM | 190328 |
| Semantic Course Task1 | Word Similarity | Regression | word2vec & Bert & WordNet | 190324 |
| Interview | Feature | Classify | lightGBM | 190318 |
| Elo | Feature | Classify | lightGBM | 190304 |
| Titanic | Feature | Classify | lightGBM | 181220 |

NLP senior Task3: Chinese traditional Sequence Annotation

DataMining Project

  • Task: Detecting Incongruity Between News Headline and Body Text via a Deep Hierarchical Encoder
  • Final paper: dataMining

NLP cbb Task2: Medicine Corpus processing with Corpus

NLP senior Task2: SemEval2013-Task13-WordSenseInduction

  • Task: SemEval2013-Task13-WordSenseInduction

    • Word Sense Induction (WSI) seeks to identify the different senses (or uses) of a target word in a given text in an automatic, fully unsupervised manner. It is a key enabling technology that aims to overcome the limitations of traditional knowledge-based and supervised Word Sense Disambiguation (WSD) methods, such as:
    • their limited adaptation to new languages and domains
    • the fixed granularity of senses
    • their inability to detect new senses (uses) not present in a given dictionary
  • Code: pku-nlp-forfun/SemEval2013-WordSenseInduction

  • Final paper: Multi-fusion on SemEval-2013 Task13: Word Sense Induction

  • Result: 11.06%/Fuzzy NMI, 57.72%/Fuzzy B-Cubed, 25.27%/Average
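The BiLM + clustering approach groups occurrences of a target word by the similarity of their contexts. As an illustration only (not the project's actual code), here is a minimal sketch using bag-of-context-words vectors and a tiny k-means; the sentences, vocabulary, and function names are all invented for the example:

```python
import random

def context_vector(sentence, target, vocab):
    """Bag-of-words vector over the context words around `target`."""
    words = [w for w in sentence.lower().split() if w != target]
    return [float(words.count(v)) for v in vocab]

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means: returns one cluster id (induced sense id) per vector."""
    random.seed(seed)
    centers = random.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        # assign each occurrence to the nearest sense centroid
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
                  for v in vectors]
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

sentences = [
    "deposit money at the bank",
    "the bank raised interest rates",
    "fishing on the river bank",
    "grass grows on the bank of the river",
]
vocab = sorted({w for s in sentences for w in s.lower().split() if w != "bank"})
vecs = [context_vector(s, "bank", vocab) for s in sentences]
senses = kmeans(vecs, k=2)  # one induced sense label per occurrence of "bank"
```

In the real system the bag-of-words vectors would be replaced by BiLM (contextual) embeddings, but the clustering step follows the same shape.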

Semantic Course Task3: NLPCC 2019 Task2 Open Domain Semantic Parsing

  • Task: Semantic Parsing

    • In this task, a Multi-perspective Semantic ParSing (MSParS) dataset is released, which can be used to evaluate the performance of a semantic parser from different aspects. The dataset includes more than 80,000 human-generated questions, each annotated with entities, the question type, and the corresponding logical form. MSParS is split into a train set, a development set, and a test set.
  • Code: semantic/task3

  • Final paper: semantic/task3

    • Result: BLEU 0.538
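The Seq2Seq output is scored with BLEU (the 0.538 above). As a quick reference for what that number measures, here is a hedged sketch of sentence-level BLEU with clipped n-gram precision and a brevity penalty; this is an illustration, not the task's official scorer, and it uses add-one smoothing so short outputs do not zero out:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    (with add-one smoothing) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clip each candidate n-gram count by its count in the reference
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log((clipped + 1) / (total + 1)) / max_n
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

For logical forms, BLEU treats the linearized form as a token sequence, which is why it is usable here despite being a translation metric.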

DeeCamp 2019 Exam A

NLP cbb Task1: Medicine Corpus processing

  • Task: Chinese word segmentation, part-of-speech tagging, named-entity recognition in Rule
  • Code: nlpcbb/task1
  • Final paper: nlpcbb/task1
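The Jieba + Dict + Rules approach rests on dictionary matching. A common baseline for the segmentation step is forward maximum matching (FMM): at each position, greedily take the longest dictionary word. This is a minimal sketch of that idea with a toy dictionary, not the assignment's actual lexicon or rules:

```python
def fmm_segment(text, dictionary, max_len=4):
    """Forward maximum matching: at each position take the longest
    dictionary word (up to max_len chars); fall back to one character."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(max_len, len(text) - i), 0, -1):
            if j == 1 or text[i:i + j] in dictionary:
                words.append(text[i:i + j])
                i += j
                break
    return words

# toy medical dictionary: "aspirin", "oral", "daily", "three times"
medical_dict = {"阿司匹林", "口服", "每日", "三次"}
print(fmm_segment("阿司匹林口服每日三次", medical_dict))
# → ['阿司匹林', '口服', '每日', '三次']
```

Jieba's dictionary mode and the hand-written rules refine this greedy scheme (e.g. resolving ambiguous overlaps), but FMM captures the core mechanism.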

Semantic Course Task2: SemEval2017-Task4-SentimentAnalysis

NLP senior Task1: SemEval2018-Task7-RelationClassification


Semantic Course Task1


  • Task:

    • Description: This is a simple binary classification task; you can design your model in any way you want.
    • Evaluation: AUC, the area under the ROC curve.
    • Data: the data files are provided in your answer submission folder in SharePoint (the link shown above).
      • train.csv: used to train the model.
      • test.csv: used to measure the model's performance.
    • Feedback / Submission:
      • your AUC score on the test dataset
      • a brief description of your model and feature engineering
      • your code
  • Final result: Code record

  • Code: interview

  • Result: macro_f1: 84.707%/Train, 82.429%/Test
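Since the task is graded by AUC, it helps to recall what the metric computes: the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney U view). A minimal reference implementation, offered as an illustration rather than the grader's code:

```python
def auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs where the
    positive's score is higher; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Note this pairwise form is O(P·N); production implementations (e.g. a lightGBM workflow) sort by score once instead, but the value is the same.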





License:MIT License


Language: Python 55.8%, TeX 22.6%, Jupyter Notebook 21.5%, Shell 0.1%