Margarita Bugueño's repositories

GRTC_GNNs

Public repository of our paper accepted to the Findings of EMNLP 2023: "Connecting the Dots: What Graph-Based Text Representations Work Best for Text Classification using Graph Neural Networks?"

Language: Jupyter Notebook · Stargazers: 2 · Issues: 1

Transformer_as_ensemble

Public repository of the paper "Learning to combine classifiers outputs with the transformer for text classification"

Language: Jupyter Notebook · Stargazers: 2 · Issues: 2

DataMining

Various assignments (Tareas_varias)

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

graphing-a-decision

Public repository of our paper "Graphing a Decision: a Survey for Explainability on Graph-based Learning Models"

Stargazers: 0 · Issues: 0

ML

Assignments (Ñanculef)

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

PIIC19

Public repository of our works in Exoplanet analysis with Deep Learning

Language: Jupyter Notebook · Stargazers: 0 · Issues: 4

Poster

Auxiliary material

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0

ProyectoIA-BEP

Final project for the Artificial Intelligence course (Elizabeth Montero)

Language: C++ · Stargazers: 0 · Issues: 0

ProyectoML

Poster and master's degree presentation

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

SIMAHcomp

Public repository of our 1st place work at the SIMAH competition held at ECML-PKDD 2019

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

Tarea1ML

From scratch (juampi)

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

Tarea2ML

Classifiers in sklearn, decision boundaries, LDA/QDA/PCA, hyperparameters.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2
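As a companion to the topics this repository covers, here is a minimal from-scratch sketch of the LDA decision rule (two classes, NumPy only; the function name and structure are illustrative assumptions, not the repository's actual code): class means plus a pooled covariance yield a linear discriminant score per class.

```python
import numpy as np

def lda_fit(X, y):
    """Fit a simple Linear Discriminant Analysis classifier.

    Illustrative sketch: class means + pooled (shared) covariance,
    which is exactly the assumption that makes LDA's boundary linear.
    Returns a predict(x) function for a single sample x.
    """
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    priors = {c: np.mean(y == c) for c in classes}
    # Pooled covariance: weighted sum of per-class covariances.
    pooled = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    pooled /= (len(y) - len(classes))
    inv = np.linalg.inv(pooled)

    def predict(x):
        # Linear discriminant score per class; the highest score wins.
        scores = {c: x @ inv @ means[c]
                     - 0.5 * means[c] @ inv @ means[c]
                     + np.log(priors[c])
                  for c in classes}
        return max(scores, key=scores.get)

    return predict
```

On well-separated data the rule recovers the obvious labels, e.g. two Gaussian blobs centered at (0, 0) and (5, 5) are split by a straight boundary between them.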

Tarea3ML

Non-linear methods

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

TEST

git commands

Stargazers: 0 · Issues: 2

TransForE

We introduce Transformer For Ensemble (TransForE), a Transformer-based method for multi-class text classification problems with strong label imbalance. It combines the learning of multiple base models, taking both their outputs and the text itself as input to a kind of parameterized ensembling machine whose purpose is to improve, or at least preserve, the effectiveness of the base models it builds on. TransForE relies on the Transformer's well-known multi-head self-attention modules to learn how to combine the multiple input components.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2
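The idea described above can be sketched in a few lines of NumPy (an illustration under loose assumptions, not TransForE's actual implementation: random matrices stand in for learned weights, and all function names are hypothetical). Each base model's output probabilities and the text embedding are treated as tokens of one sequence, multi-head self-attention mixes them, and a pooled projection produces the final class distribution.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, n_heads, rng):
    """One multi-head self-attention pass over a sequence X of shape (seq, d).
    Random Q/K/V projections stand in for learned weights (sketch only)."""
    seq, d = X.shape
    assert d % n_heads == 0
    dh = d // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh), axis=-1)  # scaled dot-product attention
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)           # concat heads -> (seq, d)

def ensemble_combine(base_probs, text_emb, n_heads=2, seed=0):
    """Combine base-classifier output probabilities with a text embedding.

    base_probs: (n_models, n_classes) probability vectors from the base models.
    text_emb:   (d,) embedding of the input text itself.
    Each base output is projected into the embedding space, stacked with the
    text token, mixed by self-attention, mean-pooled, and projected to classes.
    """
    rng = np.random.default_rng(seed)
    n_models, n_classes = base_probs.shape
    d = text_emb.shape[0]
    proj = rng.standard_normal((n_classes, d)) / np.sqrt(n_classes)
    X = np.vstack([base_probs @ proj, text_emb])    # (n_models + 1, d) tokens
    H = multi_head_self_attention(X, n_heads, rng)
    pooled = H.mean(axis=0)                         # mean-pool the sequence
    W_out = rng.standard_normal((d, n_classes)) / np.sqrt(d)
    return softmax(pooled @ W_out)                  # final class distribution
```

With trained rather than random weights, the attention layer can learn which base model to trust for which kind of input, which is the stated goal of combining their outputs with the text itself.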