ultralytics / hub

Ultralytics HUB tutorials and support

Home Page: https://hub.ultralytics.com

HELP

luisjun1203 opened this issue · comments

Search before asking

Question

"I'm a Korean student studying artificial intelligence these days, and while studying YOLO, I've come up with a model I want to create but I'm struggling with implementation. I would appreciate it if you could help me. What I want to do now is extract only the characteristics of the video data when I input video data using YOLO , but I don't know how to do it."

Additional

Thank you

👋 Hello @luisjun1203, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a πŸ› Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, and environment details, so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

@luisjun1203 hello! It's great to hear about your interest in artificial intelligence and YOLO. If you're looking to extract features from video data using YOLO, you would typically run the YOLO model on each frame of the video to get the desired outputs, which could include object bounding boxes, class labels, and confidence scores.

To achieve this, you would (a short code sketch of these steps follows the list below):

  1. Split your video into individual frames.
  2. Run the YOLO model on each frame to detect objects.
  3. Collect the output data (features) such as the coordinates of bounding boxes, class IDs, and confidence levels.
  4. Optionally, you can aggregate or further process these features depending on your specific requirements.
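Here is a minimal sketch of those steps using the ultralytics Python package. It assumes the package is installed via pip, and `yolov8n.pt` / `input_video.mp4` are placeholder names for your own model and video file. Passing `stream=True` lets the model read the video frame by frame as a generator, so you do not need to split the frames yourself:

```python
# Minimal sketch: run a YOLO detection model over a video and collect
# per-frame features (boxes, class IDs, confidences).
# "yolov8n.pt" and "input_video.mp4" are placeholder names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load a pretrained detection model

features = []  # one entry per frame
for frame_idx, result in enumerate(model("input_video.mp4", stream=True)):
    boxes = result.boxes  # detections for this frame
    features.append(
        {
            "frame": frame_idx,
            "xyxy": boxes.xyxy.cpu().numpy(),  # bounding-box coordinates
            "cls": boxes.cls.cpu().numpy(),    # class IDs
            "conf": boxes.conf.cpu().numpy(),  # confidence scores
        }
    )

print(f"Extracted detections from {len(features)} frames")
```

Each entry in `features` then holds the detections for one frame, which you can aggregate (for example, counting objects per class over time) or save to disk for later processing.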

Remember to check the Ultralytics HUB Docs for guidance on using the model and for any specific commands or configurations that might be helpful for your project.

Keep exploring and best of luck with your model! If you have more specific questions as you progress, feel free to reach out. 😊👍

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐