theassembly's repositories

Authenticate-via-TensorFlow-Facial-Recognition-in-Flutter

The power of machine learning allows us to change long-standing computing paradigms. One of these is the age-old password-based authentication system common to most apps. With fast real-time facial recognition, we can easily dispense with text-based verification and allow users to log in just by showing their faces to a webcam. In this session, we’ll show how to do this in Flutter, Google’s popular open-source UI toolkit for developing apps for web, Android, iOS, Fuchsia, and many other platforms from a single codebase. We’ll first build a simple authentication-based Android app, then deploy the Firebase ML Vision model for face detection and image processing, as well as the MobileFaceNet CNN model through TensorFlow Lite for structured verification. Once all these parts are in place, our solution will work seamlessly and can easily be ported to other apps.

Prerequisites:
✅ Android Studio (https://developer.android.com/studio) — you can also use another IDE/platform if you’d rather not use Android Studio; the Flutter documentation below covers this.
✅ Flutter SDK (https://flutter.dev/docs/get-started/install)

-----------------------------------------
To learn more about The Assembly’s workshops, visit our website, social media, or email us at workshops@theassembly.ae
Our website: http://theassembly.ae
Instagram: http://instagram.com/makesmartthings
Facebook: http://fb.com/makesmartthings
Twitter: http://twitter.com/makesmartthings

#TensorFlow #Flutter #MachineLearning

Build-An-AI-Virtual-Mouse-With-OpenCV

In our continuing deep dive into practical real-time computer vision, we’ll show you how to code a hands-free, webcam-based controller for your computer mouse using the OpenCV library in Python. This will allow you to control your computer without any physical peripheral required—Iron Man style! In this session, we’ll first obtain our live camera feed using OpenCV and then estimate hand poses using MediaPipe Hands, an open-source framework that employs machine learning to infer 3D landmarks of the hand from single frames in real time, without any fancy hardware acceleration; it works even on mobile phones. Following this, we’ll set up simulated mouse movement in response to the poses using the AutoPy automation module.

Prerequisites:
✅ Python (latest release: https://www.python.org/downloads/release/python-395/)
✅ PyCharm (https://www.jetbrains.com/pycharm/download/) or any other Python code editor
✅ pip install: OpenCV (https://pypi.org/project/opencv-python/), MediaPipe, AutoPy
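The heart of the controller is the mapping from a fingertip position in the camera frame to a cursor position on screen, and that part needs no camera to understand. The stand-alone sketch below (function and class names are our own, not from OpenCV or MediaPipe) shows the linear interpolation plus the exponential smoothing typically used to stop the cursor from jittering:

```python
def map_to_screen(x, y, frame_w, frame_h, screen_w, screen_h, margin=100):
    """Map a fingertip position in the camera frame to screen coordinates.

    A margin crops the frame edges so the full screen is reachable without
    moving the hand to the very edge of the camera's field of view.
    """
    # Clamp into the active (cropped) region, then interpolate linearly.
    x = min(max(x, margin), frame_w - margin)
    y = min(max(y, margin), frame_h - margin)
    sx = (x - margin) * screen_w / (frame_w - 2 * margin)
    sy = (y - margin) * screen_h / (frame_h - 2 * margin)
    return sx, sy

class Smoother:
    """Exponential smoothing so raw landmark noise doesn't shake the cursor."""
    def __init__(self, factor=0.3):
        self.factor = factor   # lower = smoother but laggier
        self.prev = None
    def update(self, x, y):
        if self.prev is None:
            self.prev = (x, y)
        px, py = self.prev
        self.prev = (px + (x - px) * self.factor, py + (y - py) * self.factor)
        return self.prev
```

In the real loop, the smoothed coordinates would be handed to AutoPy's mouse-move call each frame.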

Real-time-OCR-Text-To-Speech-with-Tesseract

Tesseract is a cross-platform optical character recognition (OCR) engine originally developed by HP in the 1980s and, since 2006, maintained by Google as an open-source project with high marks for accuracy in reading raw image data into digital characters. The project has been continuously developed and now offers OCR backed by LSTM neural networks for greatly improved results. In this session, we’ll use the Python wrapper for Tesseract to first test-drive OCR on images through code, before connecting our solution to a live IP video feed from your smartphone processed through OpenCV, and then translating the resultant text stream into audible form with gTTS (Google Text-To-Speech), enabling our mashup program to automatically read aloud any script it ‘sees’.

Prerequisites:
—Python IDE such as PyCharm (https://www.jetbrains.com/pycharm)
—The Tesseract engine (https://tesseract-ocr.github.io/tessdoc/Home.html)
—A smartphone configured as an IP webcam (https://www.makeuseof.com/tag/ip-webcam-android-phone-as-a-web-cam/)

#OCR #TextToSpeech #Tesseract
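One practical wrinkle in a live OCR-to-speech loop: Tesseract emits a (noisy) string for every video frame, so the same sentence would be spoken over and over. A small stdlib-only helper along these lines (the names `normalize`, `Deduper`, and `speak_if_new` are our own, not part of any library) can clean the output and pass each distinct utterance to gTTS only once:

```python
import re

def normalize(ocr_text):
    """Collapse Tesseract's line breaks and repeated spaces into clean prose."""
    return re.sub(r"\s+", " ", ocr_text).strip()

class Deduper:
    """Yield each distinct utterance once, even if OCR repeats it per frame."""
    def __init__(self):
        self.last = None
    def speak_if_new(self, ocr_text):
        text = normalize(ocr_text)
        if not text or text == self.last:
            return None          # nothing new to hand to gTTS
        self.last = text
        return text              # caller would pass this to gTTS(text).save(...)
```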

Language: Python · Stargazers: 17 · Issues: 2

Build-an-e-commerce-store-with-Django

Django is an extremely popular open-source Python-based web framework, designed to ease the creation of complex, database-driven websites with reusable, pluggable components. Django has famously been used for sites such as Instagram, Mozilla, Disqus, and Clubhouse. In this workshop, we’ll use Django to create our own e-commerce storefront that lets people buy items with or without an account, combining a database with cookies for anonymous usage. Aside from Django for the core functionality, we’ll use HTML/CSS/JavaScript to improve the user experience, and integrate with the PayPal API to handle purchase payments.

Prerequisites:
✅ Visual Studio Code (https://code.visualstudio.com/download)
✅ Python (https://www.python.org/downloads)
✅ PayPal developer account (https://developer.paypal.com/docs/get-started/)

#Django #PayPal #Python
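The anonymous-usage piece boils down to keeping the cart in a JSON cookie and folding it into the account's database cart at login. Here is a framework-free sketch of that merge step; the `{product_id: quantity}` cart shape and the function name are illustrative assumptions, not the workshop's exact models:

```python
import json

def merge_carts(cookie_value, account_cart):
    """Combine a cart stored in a cookie with one stored per account.

    cookie_value: JSON string like '{"12": 2, "35": 1}' (or empty)
    account_cart: dict mapping product id -> quantity
    Returns a new merged dict; quantities for shared items are summed.
    """
    try:
        cookie_cart = json.loads(cookie_value) if cookie_value else {}
    except json.JSONDecodeError:
        cookie_cart = {}  # a tampered cookie should not break checkout
    merged = dict(account_cart)
    for product_id, qty in cookie_cart.items():
        merged[product_id] = merged.get(product_id, 0) + int(qty)
    return merged
```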

Language: Python · Stargazers: 12 · Issues: 4

Automate-WhatsApp-with-Selenium

Facebook’s WhatsApp is almost universal as the messaging service of choice on our mobile devices, with over 2 billion users worldwide. In this session, we’ll show you how to schedule and automate message sending to multiple people on WhatsApp using the Selenium web framework. Selenium is a very powerful library for browser automation, and we’ll use its Chrome driver capabilities through Python to set up a bridge to WhatsApp’s web version.

Prerequisites:
—Python (https://www.python.org/downloads/)
—Visual Studio Code (https://code.visualstudio.com/download)

#Python #WebAutomation #WhatsApp
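The scheduling half of the problem is independent of Selenium: the script simply sleeps until the requested send time before the driver types the message. A minimal sketch of that calculation (our own helper, with the assumption that a time already past today rolls over to tomorrow):

```python
from datetime import datetime, timedelta

def seconds_until(send_at, now=None):
    """Seconds to wait until `send_at` ('HH:MM', 24-hour clock).

    If that time has already passed today, schedule for tomorrow.
    """
    now = now or datetime.now()
    hour, minute = map(int, send_at.split(":"))
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()
```

In the full script this would be used as `time.sleep(seconds_until("21:30"))` before the Selenium send step.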

Language: Python · Stargazers: 9 · Issues: 3

Code-A-Rich-Text-Editor-With-PyQt

After last month’s workshop with Tkinter, we’ll show you another great option for developing GUIs in Python. PyQt5 is the latest iteration of the Python binding for the cross-platform Qt GUI toolkit. In this session, we’ll use PyQt5 to code our own customizable rich text editor a la Microsoft Word. We’ll implement file system interactions, selective text formatting, and other features one expects from a modern word processor.

Prerequisites:
— Python (https://www.python.org/downloads/)
— Visual Studio Code (https://code.visualstudio.com/download)

Language: Python · Stargazers: 9 · Issues: 4

Turn-Any-PDF-into-an-Audiobook-

In this session, we'll show you how to use Python to automagically turn a PDF into an audiobook, without anyone needing to read the contents out loud to produce the audio. To achieve this, we'll use a few separate Python libraries—namely pyttsx3 (for text-to-speech) and PyPDF2 (to parse PDF files)—and show you how to put them together to obtain downloadable audio from your PDF input in a single command. We'll also demonstrate how you can customize this process to modulate the output voice and speed. This technique can then easily be refined further for nuances of text and speech using other libraries and techniques (including NLP/machine learning-based ones).

Prerequisites:
—Python (https://www.python.org/downloads/)
—Visual Studio Code (https://code.visualstudio.com/download)

#Python #Tutorial
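Text extracted by PyPDF2 arrives with hard line breaks and hyphenated words that make a TTS voice stumble, so a small cleanup pass before pyttsx3 reads the text noticeably improves the audio. A stdlib-only sketch of that pass (`clean_page` is a name we chose; it is not part of either library):

```python
import re

def clean_page(raw_text):
    """Prepare one page of extracted PDF text for text-to-speech."""
    # Re-join words split across lines: "informa-\ntion" -> "information".
    text = re.sub(r"-\s*\n\s*", "", raw_text)
    # Remaining line breaks are layout, not sentence breaks: use spaces.
    text = re.sub(r"\s*\n\s*", " ", text)
    # Collapse runs of spaces left over from columns and indentation.
    return re.sub(r" {2,}", " ", text).strip()
```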

Language: Python · Stargazers: 6 · Issues: 2

Build-a-Custom-Python-IDE-with-Tkinter

In this session, we’ll go meta as we build our own Python IDE in Python via the Tkinter toolkit. Tkinter is Python’s de facto standard GUI toolkit and is included with Python installs on most operating systems. This workshop is one for all the coding ninjas who would like an environment perfectly suited to their needs, stripped of the unnecessary features that come with out-of-the-box IDEs like PyCharm and VS Code—just a basic editor and runner with a simple interface that can handle files—which can then be customized with unique functionality of our choosing.

Prerequisites:
—Python (https://www.python.org/downloads/)
—Visual Studio Code (https://code.visualstudio.com/download)

#PythonIDE #DIY
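Behind the IDE's "run" button there is no magic: the editor's buffer is written to a file and executed with the Python interpreter, and whatever it prints goes to the output pane. That core step works without any GUI, so here it is on its own (`run_source` is our own name for it):

```python
import os
import subprocess
import sys
import tempfile

def run_source(source):
    """Write `source` to a temp file, run it with the current Python,
    and return (stdout, stderr) just as the IDE's output pane would."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=10)
        return proc.stdout, proc.stderr
    finally:
        os.remove(path)  # clean up the scratch file either way
```

In the Tkinter app, a button callback would pass the text widget's contents to this function and display both streams.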

Firechat

Android chat app with Firebase and Google Translate

Language: Java · Stargazers: 5 · Issues: 0

Detect-Emotions-in-Real-Time-with-OpenCV

In a previous session in March, we showed you how to train a CNN (convolutional neural network) using TensorFlow to detect human emotions from facial expressions with great accuracy (link to session: https://youtu.be/ctjkZnQF_FY). In this workshop, we’ll take our deep learning model live by integrating it with OpenCV to process real-time video. We’ll capture expressions directly from the webcam and run them through our CNN to get an instant reading on mood and emotion.

Prerequisites:
—Python (https://www.python.org/downloads/)
—Visual Studio Code (https://code.visualstudio.com/download)

#OpenCV #DataScience
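For each webcam frame, the CNN's final layer emits one raw score (logit) per emotion; turning those into a readable label is a softmax followed by an argmax. A plain-Python version of that last step (the label list is illustrative; the trained model defines the real class order):

```python
import math

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed order

def predict_emotion(logits, labels=EMOTIONS):
    """Return (label, probability) for the strongest class."""
    m = max(logits)                                  # stabilize the exp()
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```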

Automate-Spotify-Playlists-In-Python

In this session, we’ll show you how to sync up your YouTube and Spotify playlists automatically. We’ll do this by using both the YouTube and Spotify APIs through Python, with an assist from the open-source youtube-dl tool. Our solution will iterate through the entries in an existing YouTube playlist, parse each into song and artist name, run a background search on Spotify for the same, and create a new Spotify playlist with the same tracks for us to listen to - seamlessly replicating from one platform to the other at the click of a button.

Prerequisites:
✅ Python (https://www.python.org/downloads/)
✅ PyCharm (https://www.jetbrains.com/pycharm/download/) or any other code editor for Python
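The trickiest part of the sync is turning a YouTube video title into a Spotify search query. Many music videos follow the "Artist - Song (Official Video)" convention, and a parser for that pattern might look like the sketch below. This is our own heuristic, not an official API of either service, and real playlists will always contain titles it cannot split:

```python
import re

def parse_title(video_title):
    """Split 'Artist - Song (Official Video)' into (artist, song).

    Returns (None, cleaned_title) when the title doesn't follow the pattern,
    so the caller can fall back to searching the whole string.
    """
    # Drop bracketed suffixes like (Official Video) or [HD].
    cleaned = re.sub(r"[\(\[].*?[\)\]]", "", video_title).strip()
    if " - " not in cleaned:
        return None, cleaned
    artist, song = cleaned.split(" - ", 1)
    return artist.strip(), song.strip()
```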

Language: Python · Stargazers: 3 · Issues: 2

Predict-Twitter-Personality-Types-with-Machine-Learning

In this session, we’ll show you how to use machine learning to analyse a person’s personality based on their real-time Twitter feed. Specifically, we’ll first use the Twitter API to procure our input data, then train and test a naive Bayes classifier, which will be able to categorize new Twitter profiles live into a Myers-Briggs Type Indicator (MBTI). Most of you will already be familiar with this popular metric, which uses a four-letter result (such as INFJ or ENFP) to summarize different personality characteristics in terms of how individuals perceive the world and make decisions. Normally, this is derived through questionnaires and psychometric tests administered to each person, but here we’ll automatically get a result at the click of a button.

Prerequisites:
—JupyterLab (https://jupyter.org/)
—Twitter Developer Account (https://developer.twitter.com/en/dashboard)

#MachineLearning #JupyterLab
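At its core, a naive Bayes text classifier counts how often each word appears in tweets from each class and picks the class with the highest log-probability. This miniature, pure-Python version (with Laplace smoothing) shows the idea on toy data; the workshop's model would be trained on a real labelled MBTI dataset:

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """A minimal word-count naive Bayes classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            s = math.log(self.class_counts[label] / total_docs)  # prior
            for w in text.lower().split():
                # +1 smoothing so unseen words don't zero out the class.
                s += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.class_counts, key=score)
```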

Language: Jupyter Notebook · Stargazers: 3 · Issues: 2

Animate-the-Web-with-Three.js

Three.js is a revolutionary open-source, cross-platform JavaScript API and library used to create and display 3D computer graphics in a desktop web browser using WebGL, working at a high level to create GPU-accelerated animations without the need for plugins or external applications. The library comes packed with features not commonly associated with the web, including on-the-fly lighting effects, multiple camera angles, object geometry and material manipulation, and even VR/AR support through WebXR. In this session, we’ll cover the basics of the library and demonstrate how to code websites with 3D backgrounds, as well as show you how to bring 3D models onto the page for a whole new end-user experience you might not have thought possible before.

Prerequisites:
✅ Visual Studio Code (https://code.visualstudio.com/download)
✅ Basic knowledge of JavaScript and web programming

#3DAnimation #Three.js

Language: JavaScript · Stargazers: 1 · Issues: 0

Code-a-Gesture-Controlled-Snake-Game

Everyone remembers Snake, the very simple but highly addictive game that came preloaded on our old Nokia mobile phones - the objective was to maneuver a ‘snake’ (just a line of pixels that acted as its own primary obstacle and kept growing as the game went on) around a bordered plane in pursuit of items to ‘eat’ (more pixels). In this session, we’ll show you not only how to develop the game anew in Python using Pygame, but also add a unique twist: we’ll control it solely with gestures, without the need for tactile input of any sort. To do this, we’ll build upon the computer vision techniques (via OpenCV and MediaPipe) we used in earlier sessions for hand pose estimation and gesture control.

Prerequisites:
✅ Visual Studio Code (https://code.visualstudio.com/download)
✅ Python (https://www.python.org/downloads)
✅ pip install: OpenCV (https://pypi.org/project/opencv-python/), Pygame (https://pypi.org/project/pygame/), MediaPipe (https://pypi.org/project/mediapipe/)
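Keeping the game state separate from Pygame's drawing and from the gesture input makes both sides easy to reason about. Here is a headless sketch of the core rules (class and method names are our own): a snake that moves on a grid, grows when it eats, and dies on self-collision.

```python
class Snake:
    """Grid-based snake state, independent of rendering and input."""

    def __init__(self, start=(5, 5)):
        self.body = [start]          # head first
        self.direction = (1, 0)      # moving right
        self.alive = True

    def step(self, food):
        """Advance one cell; grow if the head reaches `food`.

        Returns True if the food was eaten this step.
        """
        head = (self.body[0][0] + self.direction[0],
                self.body[0][1] + self.direction[1])
        if head in self.body:
            self.alive = False       # ran into itself: game over
            return False
        self.body.insert(0, head)
        if head == food:
            return True              # keep the tail: the snake grows
        self.body.pop()              # otherwise move without growing
        return False
```

The gesture layer would only ever set `direction`, and the Pygame layer would only ever read `body`.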

Language: Python · Stargazers: 1 · Issues: 2

ElectronToDoApp

We built a multi-window to-do app using Electron and JavaScript.

Language: JavaScript · Stargazers: 1 · Issues: 1

MERN

Full Stack Development For Startups: MERN

Language: JavaScript · Stargazers: 1 · Issues: 1

opencv

Open Source Computer Vision Library

Language: C++ · License: Apache-2.0 · Stargazers: 1 · Issues: 1

Translate-Sign-Language-in-Real-time-with-TensorFlow

In this session, we’ll build a solution that detects American Sign Language (ASL) gestures via a webcam and translates them into written English in real time via a neural network. To achieve this, we’ll mash up a few different libraries and tools and show you how to use each - starting with OpenCV and Python once again to procure live images and build our own labelled gesture data set. Following this, we’ll train and test our model on TensorFlow through transfer learning (using SSD MobileNet), in the course of which we’ll show you how to use the TensorFlow Object Detection API. Once the model is ready, we’ll plug it back into Python and OpenCV to classify gestures from the real-time webcam feed.

Prerequisites:
✅ Jupyter Notebook (https://jupyter.org/install)

#AmericanSignLanguage #TensorFlow #MachineLearning

Language: Jupyter Notebook · Stargazers: 1 · Issues: 2

Transmit-Morse-Code-with-Arduino

When Samuel Morse invented the telegraph in 1838, he also devised an alphabet for communication over his revolutionary system that came to be known as Morse Code. This was a major milestone in early telecommunications, enabling encoded messaging that would eventually evolve into our modern-day Internet. Though superseded by other technology, Morse Code is still of keen interest to amateur radio enthusiasts and has utility in aeronautics and navigation - it is often used by ships at sea for light-based communication. In this session, we’ll show you how to encode text into Morse Code using the Arduino and then communicate it through light and sound via the associated hardware. This simple encoder can easily be modified for communication across media such as Bluetooth, with a similar decoder setup to complete the messaging cycle.

Prerequisites:
Arduino IDE (https://www.arduino.cc/en/main/software)

Hardware required:
—Arduino UNO
—Buzzer
—LED
—220-ohm resistor
—Jumper cables

#Arduino #DIY
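The Arduino sketch does the encoding in C++, but the lookup itself is language-agnostic and is shown here in Python for clarity: each letter maps to dots and dashes, which the hardware then plays as short and long pulses on the LED and buzzer (by convention, a dash lasts three dot-lengths).

```python
# International Morse Code for the letters A-Z.
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text):
    """Encode text as Morse; letters separated by spaces, words by ' / '.

    Characters without a Morse mapping here (digits, punctuation) are skipped.
    """
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in words)
```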

Language: C++ · Stargazers: 1 · Issues: 2

Build-a-Smart-Cane-for-the-Visually-Impaired

In this workshop, we’ll show you one of the many ways DIY tech can be used to improve the everyday lives of people of determination. With a little bit of Arduino magic, we’ll demonstrate how you can modify a cane typically used by a visually impaired person to give it the ability to sense obstacles in the way (with the help of ultrasonics) and present audio cues to the user accordingly.

Hardware required:
—Arduino Uno
—HC-SR04 ultrasonic sensors
—Jumper wires
—DC buzzer

#Arduino #DIY
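The HC-SR04 reports distance indirectly: the sketch measures how long the ultrasonic echo takes to return and converts that to centimetres using the speed of sound (roughly 343 m/s, i.e. 0.0343 cm/µs), halved because the pulse travels out and back. The same arithmetic, expressed in Python (the 50 cm alert threshold is an illustrative choice, not a fixed part of the build):

```python
def echo_to_cm(echo_microseconds):
    """Convert an HC-SR04 echo duration (microseconds) to distance in cm."""
    return echo_microseconds * 0.0343 / 2  # halved: out-and-back travel

def should_buzz(distance_cm, threshold_cm=50):
    """The cane alerts the user when an obstacle is inside the threshold."""
    return distance_cm < threshold_cm
```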

Language: C++ · Stargazers: 0 · Issues: 2

Code-a-COVID-Density-Display-With-MapBox

Mapbox is an American provider of custom online maps for websites and apps, combining slick stylization with robust data processing to supercharge the visual representation of location data. It’s used by Foursquare, Lonely Planet, The Financial Times, The Weather Channel, Snapchat, and many others, with a vast array of niche customization that isn’t found in other providers like Google Maps - the company is currently valued at over $1 billion. In this session, we’ll show you how to get started with the Mapbox API. We’ll take spreadsheet data on COVID cases and transform it into JSON to bring it into JavaScript, where we’ll use the API to display COVID density for individual regions with color-coded markers on a custom world map.

Prerequisites:
✅ Visual Studio Code (https://code.visualstudio.com/download)
✅ Basic knowledge of JavaScript and web programming

#COVID19 #Mapbox
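Before the map layer gets involved, each spreadsheet row has to become a GeoJSON point with a colour chosen from its case count. The bucketing sketch below uses Python for the data-preparation step; the thresholds, colours, and property names are illustrative choices of ours, not part of the Mapbox API (only the GeoJSON Feature shape is standard):

```python
def density_color(cases_per_100k):
    """Pick a marker colour for a region's COVID density (assumed buckets)."""
    if cases_per_100k < 50:
        return "#2dc937"   # green: low
    if cases_per_100k < 200:
        return "#e7b416"   # amber: moderate
    return "#cc3232"       # red: high

def to_feature(region, lon, lat, cases_per_100k):
    """Wrap one spreadsheet row as a GeoJSON Feature a map layer can render."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"name": region,
                       "color": density_color(cases_per_100k)},
    }
```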

Language: HTML · Stargazers: 0 · Issues: 2

Code-a-Screen-Recorder-with-OpenCV

Screen recording is an essential productivity hack, given how much work depends on video communication these days. However, you usually need to turn to external apps, which come with a range of caveats for usage. In this session, we’ll build our own customizable screen recorder using Python with the OpenCV library.

Prerequisites:
—Basic Python knowledge
—Visual Studio Code (https://code.visualstudio.com/download)

#OpenCV #DIY

Language: Python · Stargazers: 0 · Issues: 2

Make-an-Air-Canvas-with-OpenCV

The OpenCV library is one of The Assembly’s favourite toolkits, with its easy-to-use processing capabilities for real-time computer vision. Working seamlessly with Python, the open-source library has been very useful for processing live video capture on the fly with little overhead, delivering impressive results with minimal code. In this session, we’ll use OpenCV via Python to code our own air canvas, allowing you to doodle in thin air using hand gestures captured by the camera, with the results transferred to the screen directly and in real time. Our four-color palette will be represented by four of the fingers (our virtual ‘crayons’), with the fifth used as an eraser.

Prerequisites:
✅ Python (latest release: https://www.python.org/downloads/release/python-395/) — we’ll use the pre-bundled Python IDLE as our environment for this session.
✅ OpenCV — install using pip (https://pypi.org/project/opencv-python/)
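Choosing a crayon comes down to deciding which fingers are raised. With MediaPipe-style hand landmarks (where y grows downward in image coordinates), a finger counts as "up" when its tip sits above the joint two landmarks below it. This stand-alone check works on plain (x, y) tuples, so it needs no camera to try out; the landmark indices follow MediaPipe's layout, while the function name is our own:

```python
# MediaPipe hand-landmark indices for the four non-thumb fingertips.
FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}

def fingers_up(landmarks):
    """Return the set of raised fingers given 21 (x, y) hand landmarks.

    In image coordinates y grows downward, so a smaller y means higher up.
    """
    up = set()
    for name, tip in FINGER_TIPS.items():
        # Compare the tip against the joint two landmarks below it.
        if landmarks[tip][1] < landmarks[tip - 2][1]:
            up.add(name)
    return up
```

The canvas logic would then map, say, `{"index"}` to drawing with the current colour and all five fingers up to erasing.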

Language: Python · Stargazers: 0 · Issues: 2

Monitor-The-Weather-With-Arduino-IoT-Cloud

In 2019, Arduino announced the addition of the IoT Cloud service as part of its Create online environment. This end-to-end, low-code cloud solution makes it easy for IoT enthusiasts and professionals to register supported devices and enable interaction and data flow with minimum fuss, integrating seamlessly with MQTT, IFTTT, WebSockets, and more. Objects connected to the platform can easily be put into IoT workflows and scaled up to manage fleets of devices from a single online dashboard. In this session, we’ll show you how to set up the IoT Cloud with an Arduino MKR WiFi 1010 connected to a DHT11 sensor that will procure and transmit ambient temperature and humidity readings for a location. We’ll create an online dashboard to monitor these remotely in real time, before using the PushingBox cloud service to sync up with Google Sheets as well.

Software required:
✅ Arduino IDE
✅ Arduino IoT Cloud account

Hardware required:
✅ Arduino MKR WiFi 1010
✅ Breadboard
✅ Jumper cables
✅ DHT11 sensor

#Arduino #DIY #IoT

Language: C++ · Stargazers: 0 · Issues: 2

UnityWorkshop1

Creating a simple 2D game.

Language: C# · Stargazers: 0 · Issues: 0

UnityWorkshop2

Creating a simple 3D game.

Stargazers: 0 · Issues: 0