1407arjun / EnvisionBuddy

An app that simplifies visualization and fosters learning, using Machine Learning to extract keywords from the scanned text and suggest AR models related to the concept the student is studying.

Envision Buddy - A Team Inversion Project

Abstract

Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information. We have already seen its varied applications in tours, medical training, modeling, etc. Its use in education is still limited, but it is expanding at a fast pace. This is the age of learning more by visualization and less by reading, and educational institutions have upgraded their teaching methods to employ more audio-visual aids in the classroom. These aids, however, are in 2D and do not give students the freedom to visualize a concept the way they understand it, which differs from student to student.

Objective

One of the other fields where the usage of AR is growing is classroom education. But what about learning beyond the classroom? The prime objective of this project is to help students learn and understand concepts in a much better and more streamlined manner (beyond what was taught using conventional classroom methods), quashing any doubts that linger in students' minds due to the lack of visualizing capacity.

Implementation

As said above, what if a student wants to instantly visualize a particular chemical structure or phenomenon in 3D, from all possible angles, enlarging or rotating it as needed? We achieve this using augmented reality, and what better platform to deploy it on than smartphones, which are ubiquitous these days. Using the app, students can visualize any model in 3D and in any orientation they wish.

Novelty

Some may argue that a student can refer to videos on YouTube or similar websites, or even search the Internet for the same. But one must note that the 3D models shown in a video cannot be rotated or resized at will; the viewer must scrub the video back and forth to achieve this, which can be frustrating. In this app, all a student needs to do is use his/her fingers to zoom (pinch) or rotate (swipe). While many AR apps can achieve this, the feature that makes our app stand out is that it can identify which model you may require for a particular concept. All you need to do is point the phone towards the book so that it can identify the concept and display the relevant AR model. No searches needed, no video scrubbing needed...all that's between you and your 3D model is a single scan.

Workflow

  1. The user scans the paragraph/title/image caption related to the concept he/she is studying.
  2. The app captures the image and identifies the text in it.
  3. Using machine learning algorithms, the app detects the keywords present in the detected text and thus predicts the concept that the user is studying.
  4. The system then pulls out the relevant 3D model from the wide range of models available matching the requirement of the user as predicted by the ML algorithm.
  5. The user has to just hold his/her device in a manner that a surface can be detected by the app.
  6. And it's done! The 3D model now appears on the detected surface. The user can manually rotate or resize it, or walk around it with the phone to view the model from all angles (rather than rotating it), and thus analyze and understand the concept in depth.
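Steps 2-4 above can be sketched in plain Java. ML Kit's on-device OCR and the hosted model catalog are stubbed here with a fixed string and a hard-coded keyword lookup; the keyword list, model paths, and the `ConceptMatcher` class name are all hypothetical, for illustration only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConceptMatcher {
    // Hypothetical keyword -> 3D model mapping. In the app, the concept is
    // predicted by the ML model and the asset comes from the model catalog.
    private static final Map<String, String> MODEL_CATALOG = new LinkedHashMap<>();
    static {
        MODEL_CATALOG.put("benzene", "models/benzene_ring.glb");
        MODEL_CATALOG.put("dna", "models/dna_double_helix.glb");
        MODEL_CATALOG.put("solar system", "models/solar_system.glb");
    }

    // Step 3-4: scan the recognized text for a known keyword and return the
    // matching model path, or null if no concept is detected.
    public static String matchConcept(String recognizedText) {
        String lower = recognizedText.toLowerCase();
        for (Map.Entry<String, String> entry : MODEL_CATALOG.entrySet()) {
            if (lower.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Steps 1-2 (camera capture + text recognition) replaced by a fixed string.
        String scanned = "The structure of Benzene consists of a six-membered ring";
        System.out.println(matchConcept(scanned)); // models/benzene_ring.glb
    }
}
```

In the real app the lookup runs against the ML prediction rather than a substring match, but the control flow from recognized text to model asset is the same.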

Future scope

The app, for now, runs on a cloud-based system, i.e., one has to be connected to the internet to view the 3D models. In the future, we plan to group these models subject/course-wise and allow the user to download the models for the subjects they wish to view. This would facilitate offline learning, removing the barrier of a fast, active internet connection between the user and learning. The app can be very useful for those who follow self-learning practices or those who cannot afford the big coaching institutes (where advanced teaching techniques are applied).
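The planned subject-wise download could look something like the sketch below: models are grouped by subject, a whole subject is fetched once, and subsequent lookups are served locally. The subject names, model names, and the `ModelLibrary` class are hypothetical, and the actual download is simulated.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ModelLibrary {
    // Hypothetical subject-wise grouping of the model catalog.
    private static final Map<String, List<String>> SUBJECT_MODELS = new HashMap<>();
    static {
        SUBJECT_MODELS.put("chemistry", List.of("benzene_ring", "water_molecule"));
        SUBJECT_MODELS.put("biology", List.of("dna_double_helix", "animal_cell"));
    }

    // Models already stored on the device.
    private final Set<String> downloaded = new HashSet<>();

    // Simulate downloading every model of a subject for offline use.
    public void downloadSubject(String subject) {
        downloaded.addAll(SUBJECT_MODELS.getOrDefault(subject, List.of()));
    }

    // True if the model can be shown without an internet connection.
    public boolean isAvailableOffline(String model) {
        return downloaded.contains(model);
    }
}
```

The point of the grouping is that one download per subject covers a whole course, so the user only needs connectivity once.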

Tech stack

  1. Android Studio
  2. Google ML Kit
  3. ARCore
  4. echoAR

Note

The repository may be committed to several times in order to update the URL of the model present in the ChoiceActivity.java file, since it is hosted on a localhost server whose URL refreshes every two hours. This approach will soon be replaced by either a cloud-based or a local ML model implementation using TensorFlow Lite models.

App Icon Credits: https://cdn4.iconfinder.com/data/icons/smart-technology-indigo-vol-1/256/AR_Technology-512.png

Languages

Java 100.0%