Harish Tayyar Madabushi (H-TayyarMadabushi)

Company: The University of Bath

Location: Bath, UK

Home Page: https://www.harishtayyarmadabushi.com/

Twitter: @harish

Harish Tayyar Madabushi's repositories

Cost-Sensitive_Bert_and_Transformers

Transformers-based implementation of Cost-Sensitive BERT for Generalisable Sentence Classification on Imbalanced Data (an illustrative weighted-loss sketch follows this entry).

Language: Python · License: Apache-2.0 · Stargazers: 19 · Issues: 1 · Issues: 0
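
The core idea behind cost-sensitive classification on imbalanced data is to weight the loss so that errors on the minority class cost more. The sketch below illustrates that idea with the Hugging Face transformers API; it is not the repository's own training code, and the checkpoint name, class weights, and example texts are placeholders.

```python
# Illustrative sketch of cost-sensitive fine-tuning with a class-weighted loss.
# Not the repository's actual code; model name, weights, and data are placeholders.
import torch
from torch.nn import CrossEntropyLoss
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Up-weight the minority class so misclassifying it is penalised more heavily.
class_weights = torch.tensor([0.3, 0.7])
loss_fn = CrossEntropyLoss(weight=class_weights)

texts = ["an example from the frequent class", "a rare minority-class example"]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
logits = model(**batch).logits   # shape: (batch_size, num_labels)
loss = loss_fn(logits, labels)   # weighted loss replaces the default unweighted one
loss.backward()
```

In practice the weights would typically be derived from the inverse class frequencies of the training data rather than hard-coded as above.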

SemEval_2022_Task2-idiomaticity

Data and preprocessing scripts for SemEval 2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding

Language: Python · License: GPL-3.0 · Stargazers: 14 · Issues: 2 · Issues: 2

AStitchInLanguageModels

Data and baselines for the AStitchInLanguageModels dataset

Language: Python · License: GPL-3.0 · Stargazers: 12 · Issues: 3 · Issues: 1

CxGBERT-BERT-meets-Construction-Grammar

Construction Grammar based BERT

Language: Python · License: GPL-3.0 · Stargazers: 12 · Issues: 2 · Issues: 0

XML-Smart

A smart, easy and powerful way to access/create XML files/data (Perl).

Language: Perl · Stargazers: 3 · Issues: 3 · Issues: 0

Abstraction-not-Memory-BERT-and-the-English-Article-System-NAACL-2022

Article prediction is a task that has long defied accurate linguistic description. As such, it is ideally suited to evaluating models on their ability to emulate native-speaker intuition. To this end, we compare the performance of native English speakers and pre-trained models on article prediction set up as a three-way choice (a/an, the, zero). Our experiments show that BERT outperforms humans on this task across all articles. In particular, BERT is far superior to humans at detecting the zero article, possibly because zero articles are inserted using rules that the deep neural model can easily pick up. More interestingly, we find that BERT tends to agree more with annotators than with the corpus when inter-annotator agreement is high, but switches to agreeing more with the corpus as inter-annotator agreement drops. We contend that this alignment with annotators, despite being trained on the corpus, suggests that BERT is not memorising article use but is capturing a high-level generalisation of article use akin to human intuition. (An illustrative sketch of the three-way setup follows this entry.)

Language: R · License: GPL-3.0 · Stargazers: 0 · Issues: 1 · Issues: 0
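
One plausible way to frame the three-way article choice with a masked language model is sketched below. This is an illustration only, assuming a BERT masked-LM checkpoint; the repository's actual experimental setup may differ, and the zero-article option is not handled here because it has no surface token to score.

```python
# Illustrative framing of article prediction as masked-token scoring.
# Assumes bert-base-uncased; the zero-article case is not covered here.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def article_scores(prefix, noun_phrase):
    """Return the model's probabilities for 'a', 'an' and 'the' at the article slot."""
    masked = f"{prefix} {tokenizer.mask_token} {noun_phrase}."
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)
    return {w: probs[tokenizer.convert_tokens_to_ids(w)].item() for w in ("a", "an", "the")}

print(article_scores("She adopted", "cat"))  # prints the probabilities for 'a', 'an', 'the'
```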

Adjudicating-LLMs-as-PropBank-Annotators

Data and model outputs for the paper: Adjudicating LLMs as PropBank Annotators

License: GPL-3.0 · Stargazers: 0 · Issues: 0 · Issues: 0

c2xg

A Python package for learning, evaluating, annotating, and extracting vector representations of construction grammars

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 1 · Issues: 0

Construction_Grammar_Schematicity_Corpus-CoGS

The Construction Grammar Schematicity ("CoGS") corpus consists of 10 distinct English constructions that vary with respect to schematicity.

Stargazers: 0 · Issues: 1 · Issues: 0

Emergent_Abilities_and_in-Context_Learning

Are Emergent Abilities in Large Language Models just In-Context Learning?

Stargazers: 0 · Issues: 1 · Issues: 0

HTML_Miner

HTML::Miner - This module 'mines' (hopefully) useful information from a URL or HTML snippet.

Language: Perl · Stargazers: 0 · Issues: 2 · Issues: 0

ner-bert

BERT-NER (ner-bert) with Google BERT (https://github.com/google-research); a minimal pipeline-based sketch follows this entry.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 1 · Issues: 0
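
For context, a typical way to run BERT-based NER with the Hugging Face pipeline API is shown below; this is not the ner-bert repository's own code, and the checkpoint is simply a commonly used public NER model.

```python
# Minimal BERT-based NER example via the transformers pipeline API.
# Not the ner-bert repository's code; the checkpoint is a public example model.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

for entity in ner("Harish Tayyar Madabushi works at the University of Bath in the UK."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```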

Net-XMPP-Client-GTalk

Net::XMPP::Client::GTalk - This module provides an easy-to-use wrapper around the Net::XMPP class of modules for access to GTalk (both on Gmail and Google Apps).

Language: Perl · Stargazers: 0 · Issues: 2 · Issues: 0

sentence-transformers

Multilingual Sentence & Image Embeddings with BERT (a short usage example follows this entry).

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0
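
Typical usage of the upstream sentence-transformers library that this fork tracks looks like the snippet below; the checkpoint name is one of the library's standard multilingual models, chosen here only for illustration.

```python
# Encode sentences and compare them with cosine similarity using sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = ["He kicked the bucket.", "He passed away.", "He kicked the ball."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity of the first sentence against the other two.
print(util.cos_sim(embeddings[0], embeddings[1:]))
```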

WWW-PerlMonks

This (Perl) module provides access to PerlMonks.

Language: Perl · Stargazers: 0 · Issues: 2 · Issues: 0