Artur Begyan (arturbeg)


Company: Reya Labs

Location: London, United Kingdom

Home Page: reya.network

Twitter: @arturbegyan


Artur Begyan's repositories

reinforcement_learning_oe

This work explores value-based deep reinforcement learning (Deep Q-Learning and Double Deep Q-Learning) for the problem of Optimal Trade Execution. Optimal Trade Execution seeks the optimal "path" for executing a stock order, in other words the number of shares to execute at each step under a time constraint, such that market price impact is minimised and, consequently, the revenue from executing the order is maximised.
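As a toy illustration of the value-based approach described above (hypothetical, not the repository's code), the sketch below learns a tabular Q-function for a tiny execution problem in which selling many shares at once incurs a quadratic price impact, so the learned policy tends to spread the order over the available steps. All names and the impact model are assumptions for the example.

```python
import numpy as np

# Toy Q-learning for trade execution: state = (time step, shares left),
# action = number of shares to sell now. Quadratic temporary impact
# penalises dumping the whole order in one step.
rng = np.random.default_rng(0)
T, S = 4, 4                         # time steps, total shares to execute
Q = np.zeros((T, S + 1, S + 1))     # Q[t, remaining, action]
alpha, gamma, eps = 0.1, 1.0, 0.2

def reward(sold):
    # revenue minus a quadratic temporary price impact (illustrative)
    return sold - 0.5 * sold ** 2

for _ in range(5000):
    remaining = S
    for t in range(T):
        if t == T - 1:
            a = remaining            # the order must finish by the deadline
        elif rng.random() < eps:
            a = int(rng.integers(remaining + 1))   # explore
        else:
            a = int(np.argmax(Q[t, remaining, :remaining + 1]))
        r = reward(a)
        nxt = remaining - a
        target = r if t == T - 1 else r + gamma * Q[t + 1, nxt, :nxt + 1].max()
        Q[t, remaining, a] += alpha * (target - Q[t, remaining, a])
        remaining = nxt

# Greedy execution schedule implied by the learned Q-table.
schedule, remaining = [], S
for t in range(T):
    a = remaining if t == T - 1 else int(np.argmax(Q[t, remaining, :remaining + 1]))
    schedule.append(a)
    remaining -= a
print(schedule)
```

A Deep Q-Learning variant would replace the table `Q` with a neural network over the same state; the structure of the update stays the same.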

google_trends_consumption_prediction

This work investigates the forecasting relationship between a Google Trends indicator and real private consumption expenditure in the US. The indicator is constructed by applying Kernel Principal Component Analysis to consumption-related Google Trends search categories. Its predictive performance is evaluated against two conventional survey-based indicators: the Conference Board Consumer Confidence Index and the University of Michigan Consumer Sentiment Index. The findings suggest that, in both in-sample and out-of-sample nowcasting estimations, the Google indicator outperforms the survey-based predictors. The results also show that the predictive performance of survey-augmented models is no different from that of a baseline autoregressive model that includes macroeconomic variables as controls, pointing to the considerable potential of Google Trends data as a tool for forecasters of private consumption.

Language: Python | Stargazers: 39 | Issues: 1
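The indicator construction described above can be sketched with a dependency-free kernel PCA (a hedged illustration with synthetic data, not the repository's pipeline): build an RBF kernel over the monthly search-category matrix, double-centre it, and take the leading kernel principal component as a single monthly indicator.

```python
import numpy as np

# Hypothetical data: 60 months x 8 consumption-related search categories.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))
X = (X - X.mean(0)) / X.std(0)        # standardise each category

# RBF kernel matrix between months.
gamma = 1.0 / X.shape[1]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# Double-centre the kernel matrix, then take the leading eigenvector.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
indicator = vecs[:, -1] * np.sqrt(vals[-1])   # first kernel PC, one value per month
print(indicator.shape)
```

In a nowcasting regression, `indicator` would enter as an extra regressor alongside the autoregressive and macroeconomic controls mentioned above.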

fractal_flutter

Fractal is an ML-powered network of interconnected public chats that allows branching of chats into more focused “sub-chats”, thereby overcoming the problem of rapid conversation subject dilution and low engagement. Fractal aims to allow unacquainted individuals to spontaneously find and discuss niche topics of common interest in real-time.

Language: Dart | Stargazers: 26 | Issues: 0

efficient_transformer

Scaling Transformer architectures has been critical for pushing the frontiers of Language Modelling (LM), a problem central to Natural Language Processing (NLP) and Language Understanding. Although there is a direct positive relationship between the Transformer capacity and its LM performance, there are practical limitations which make training massive models impossible. These limitations come in the form of computation and memory costs which cannot be solely addressed by training on parallel devices. In this thesis, we investigate two approaches which can make Transformers more computationally and memory efficient. First, we introduce the Mixture-of-Experts (MoE) Transformer which can scale its capacity at a sub-linear computational cost. Second, we present a novel content-based sparse attention mechanism called Hierarchical Self Attention (HSA). We demonstrate that the MoE Transformer is capable of achieving lower test perplexity values than a vanilla Transformer model with higher computational demands. Language Modelling experiments, involving a Transformer which uses HSA in place of conventional attention, revealed that HSA can speed up attention computation by up to 330% at a negligible cost in model performance.

Language: Python | Stargazers: 2 | Issues: 1
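The sub-linear scaling of the MoE Transformer described above comes from routing each token to only one (or a few) expert feed-forward networks. The following numpy sketch (illustrative, not the thesis code) shows top-1 gating: adding experts grows parameter count while the compute per token stays roughly constant.

```python
import numpy as np

# Minimal Mixture-of-Experts feed-forward layer with top-1 routing.
rng = np.random.default_rng(0)
d_model, d_ff, n_experts, n_tokens = 16, 32, 4, 10

W_gate = rng.standard_normal((d_model, n_experts)) * 0.1
experts = [(rng.standard_normal((d_model, d_ff)) * 0.1,
            rng.standard_normal((d_ff, d_model)) * 0.1)
           for _ in range(n_experts)]

def moe_layer(x):
    logits = x @ W_gate                           # (tokens, experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    choice = probs.argmax(-1)                     # top-1: one expert per token
    out = np.zeros_like(x)
    for e, (W1, W2) in enumerate(experts):
        mask = choice == e
        if mask.any():
            h = np.maximum(x[mask] @ W1, 0.0)     # ReLU feed-forward expert
            out[mask] = (h @ W2) * probs[mask, e][:, None]  # gate-weighted
    return out

x = rng.standard_normal((n_tokens, d_model))
y = moe_layer(x)
print(y.shape)
```

A production MoE layer would add a load-balancing loss so that tokens do not all collapse onto one expert; that detail is omitted here for brevity.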

fractal-angular-prod

Fractal is an ML-powered network of interconnected public chats that allows branching of chats into more focused “sub-chats”, thereby overcoming the problem of rapid conversation subject dilution and low engagement. Fractal aims to allow unacquainted individuals to spontaneously find and discuss niche topics of common interest in real-time.

Language: TypeScript | Stargazers: 1 | Issues: 0

PLAsTiCC-Astronomical-Classification-Solution

PLAsTiCC is a large data challenge that aims to classify astronomical objects by analysing time-series measurements of "light curves" (the intensity of photon flux) emitted by cosmological objects, observed in six different astronomical passbands. The flux may decrease or increase over time, and the pattern of changes in an object's brightness is a good indicator of the underlying object. Each object in the training data belongs to one of 14 classes; the test data contains an additional 15th class meant to capture "novelties" (objects that are hypothesised to exist).

Language: Python | Stargazers: 1 | Issues: 1

bubble

Speech bubble for Flutter

Language: Dart | License: BSD-2-Clause | Stargazers: 0 | Issues: 0

cs224u

Code for Stanford CS224u

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 0 | Issues: 0

django_realtime_reddit_clone_archive

A real-time Reddit clone built using just Django (web-socket functionality is implemented via Django Channels). Archive repository migrated from Bitbucket.

Language: JavaScript | Stargazers: 0 | Issues: 1

fast-transformers

PyTorch library for fast transformer implementations

Language: Python | Stargazers: 0 | Issues: 0

hn_app

The HN reader app developed live on The Boring Flutter Development Show

Language: Dart | License: BSD-3-Clause | Stargazers: 0 | Issues: 0

mac-network

Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1

merkle-distributor

📦 A smart contract that distributes a balance of tokens according to a merkle root

Language: TypeScript | License: GPL-3.0 | Stargazers: 0 | Issues: 0
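The distribution scheme above works by committing to the full claim list with a single Merkle root on-chain, while each claimant submits a short proof. The sketch below (illustrative, not the contract's exact encoding; it uses SHA-256 for a dependency-free example, whereas the on-chain contract uses keccak256) builds such a root off-chain with sorted-pair hashing, the convention common Merkle-proof verifiers use so a proof needs no left/right position bits.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf byte-strings into a single 32-byte root."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate the odd node out
        # Hash each pair in sorted order so proofs are position-independent.
        layer = [h(min(a, b) + max(a, b))
                 for a, b in zip(layer[::2], layer[1::2])]
    return layer[0]

# Hypothetical claims: (account, amount) serialised as bytes.
claims = [f"0xAccount{i}:{100 * (i + 1)}".encode() for i in range(5)]
root = merkle_root(claims)
print(root.hex())
```

Only `root` goes on-chain; a claimant later presents their leaf plus the sibling hashes along the path, and the contract re-folds them and compares against the stored root.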

MixMatch-pytorch

Code for "MixMatch - A Holistic Approach to Semi-Supervised Learning"

Language: Python | License: MIT | Stargazers: 0 | Issues: 1

ml_from_scratch

A collection of machine learning models and procedures implemented from scratch

Stargazers: 0 | Issues: 1

openzeppelin-contracts

OpenZeppelin Contracts is a library for secure smart contract development.

Language: JavaScript | License: MIT | Stargazers: 0 | Issues: 0

streetlearn

A C++/Python implementation of the StreetLearn environment based on images from Street View, as well as a TensorFlow implementation of goal-driven navigation agents solving the task published in “Learning to Navigate in Cities Without a Map”, NeurIPS 2018

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1

tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 2

timy-messenger

Timy - an open-source mobile app for groups to communicate and organise themselves. Built with Flutter.

Language: Dart | License: Apache-2.0 | Stargazers: 0 | Issues: 1

tutorials

PyTorch tutorials.

Language: Jupyter Notebook | License: BSD-3-Clause | Stargazers: 0 | Issues: 0

universal-router

Uniswap's Universal Router for NFT and ERC20 swapping

Language: TypeScript | License: GPL-3.0 | Stargazers: 0 | Issues: 0

visor-core

The DeFi protocol for Active Liquidity Management. Building on Uniswap v3.

Language: Solidity | License: NOASSERTION | Stargazers: 0 | Issues: 0

yieldAggregators

SoK - Yield Aggregators in DeFi

Language: Python | Stargazers: 0 | Issues: 0