Repositories under the azureml topic:
Python notebooks with ML and deep learning examples using the Azure Machine Learning Python SDK | Microsoft
Official community-driven Azure Machine Learning examples, tested with GitHub Actions.
MLOps using Azure ML Services and Azure DevOps
Compare MLOps Platforms. Breakdowns of SageMaker, VertexAI, AzureML, Dataiku, Databricks, h2o, kubeflow, mlflow...
Operationalize a video anomaly detection model with Azure ML
Azure Machine Learning SDK for R
Azure MLOps (v2) solution accelerators. Enterprise-ready templates to deploy your machine learning models on the Azure platform.
Official Azure Reference Architectures for AI workloads
Architecture for deploying real-time scoring of machine learning models using Azure Machine Learning
Distributed Deep Learning using AzureML
ML DevOps using GitHub Actions and Azure Machine Learning
A workshop for doing MLOps on Azure Machine Learning
Deploying a Batch Scoring Pipeline for Python Models
The InnerEye-Gateway is a Windows service that acts as a DICOM end point to run inference on https://github.com/microsoft/InnerEye-DeepLearning models.
Get started with Automated Machine Learning (AutoML) and Machine Learning Operations (MLOps) in Azure Machine Learning
This repo proposes agendas for Azure Machine Learning hands-on workshops.
Samples for fine-tuning HuggingFace models with AzureML
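As a hedged sketch of what such a fine-tuning job can look like in the Azure ML CLI v2 command-job schema (the script name, environment label, compute target, and input values below are placeholders, not taken from the repo):

```yaml
# Hypothetical Azure ML CLI v2 command job for HuggingFace fine-tuning.
# finetune.py, the environment, and the compute name are assumptions.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: >-
  python finetune.py
  --model_name ${{inputs.model_name}}
  --epochs ${{inputs.epochs}}
code: ./src
inputs:
  model_name: bert-base-uncased
  epochs: 3
environment: azureml:AzureML-pytorch-gpu@latest
compute: azureml:gpu-cluster
```

A job file like this would typically be submitted with `az ml job create --file job.yml` against a configured workspace.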
Natural language processing: from data preparation to building a model and deploying it as a web service
A Repository for the public preview of Responsible AI in AML vNext
Create an environment within AzureML that supports DeepSpeed training, and run some example training processes on it.
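A minimal DeepSpeed configuration of the kind such an environment would consume might look like the following (the batch size and ZeRO stage are illustrative values, not taken from the repo):

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```

The training script passes a file like this to the DeepSpeed launcher, which applies mixed precision and ZeRO partitioning accordingly.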
In line with conventional big data and data science practice, causal analysis had been the only approach considered for our demand forecasting effort, applied uniformly across the product portfolio. Experience shows that not all data are the same: each group of products exhibits different data patterns depending on how those products were sold and supported over their life cycle. One-methodology-fits-all is appealing from an implementation point of view, but in practice a solution must accommodate the varying needs of different product types in the portfolio, such as new products (both evolutionary and revolutionary), niche products, and high-growth products. Against this backdrop, we developed a solution that segments the product portfolio into quadrants and then matches a series of algorithms to each quadrant instead of applying one methodology to all. The technology stack is simulated/mocked data (Hadoop ecosystem) > AzureML with R/Python > Zeppelin.
Run DAGs of MLflow Projects in Kubernetes using Concurrent for MLflow
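Each node of such a DAG is an MLflow Project; a minimal `MLproject` file for one node might look like this (the project name, script, and parameter are placeholders, not taken from the repo):

```yaml
# Hypothetical MLproject file for a single DAG node; names are assumptions.
name: prep-step
entry_points:
  main:
    parameters:
      input_path: {type: string, default: data/raw}
    command: "python prep.py --input {input_path}"
```

An orchestrator can then launch each project with `mlflow run`, wiring one node's outputs into the next node's parameters.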