Difference in inferencing the custom models #9507
@Sanath1998 👋 Hello! Thanks for asking about handling inference results. YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as `model` and passes an image for inference:

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # yolov5n - yolov5x6 official models
# model = torch.hub.load('ultralytics/yolov5', 'custom', 'path/to/best.pt')  # custom model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
results.xyxy[0]  # im predictions (tensor)
results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

results.pandas().xyxy[0].value_counts('name')  # class counts (pandas)
# person    2
# tie       1
```

See the YOLOv5 PyTorch Hub Tutorial for details. Good luck 🍀 and let us know if you have any other questions!
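The class counts shown above can also be tallied without pandas. A minimal sketch using only the standard library, assuming the class names have already been extracted from the results (the `names` list here is a hypothetical stand-in for `results.pandas().xyxy[0]['name'].tolist()`):

```python
from collections import Counter

# Hypothetical list of detected class names, e.g. pulled from a results
# DataFrame via results.pandas().xyxy[0]['name'].tolist()
names = ['person', 'person', 'tie']

counts = Counter(names)  # same tallies as value_counts('name')
print(counts)  # Counter({'person': 2, 'tie': 1})
```

`Counter` gives the same per-class totals as the pandas `value_counts('name')` call, which can be handy when the results are already materialized as plain Python objects.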
As you had stated above, model = torch.hub.load('ultralytics/yolov5', 'yolov5s') loads a pretrained model.
@Sanath1998 I've already explained this in my previous message.
Thanks @glenn-jocher :)
Hi @glenn-jocher,
👋 Hello, thanks for asking about the differences between train.py, detect.py and val.py in YOLOv5 🚀. These 3 files are designed for different purposes and utilize different dataloaders with different settings. train.py dataloaders are designed for a speed-accuracy compromise, val.py is designed to obtain the best mAP on a validation dataset, and detect.py is designed for best real-world inference results. A few important aspects of each:

- train.py
- val.py
- detect.py

YOLOv5 PyTorch Hub Inference

YOLOv5 PyTorch Hub models are AutoShape() instances (see Lines 276 to 282 in 7ee5aed).

Good luck 🍀 and let us know if you have any other questions!
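The reason the one-line `results = model(im)` call works on files, URLs, PIL images, numpy arrays, and lists is that Hub models wrap the network in an input-normalizing layer (AutoShape) before the forward pass. The toy sketch below illustrates that idea only; `ToyAutoShape` and `fake_infer` are hypothetical illustrations, not YOLOv5 code:

```python
from pathlib import Path

def fake_infer(batch):
    # Stub forward pass: one dummy (xmin, ymin, xmax, ymax, conf, name) box per image
    return [[(0.0, 0.0, 10.0, 10.0, 0.9, 'person')] for _ in batch]

class ToyAutoShape:
    """Toy input-normalizing wrapper: accepts one source or a list of sources."""
    def __call__(self, ims):
        if not isinstance(ims, (list, tuple)):  # single image -> batch of one
            ims = [ims]
        ims = [str(i) if isinstance(i, Path) else i for i in ims]  # normalize Paths
        return fake_infer(ims)

model = ToyAutoShape()
print(len(model('zidane.jpg')))              # single source -> 1 result
print(len(model([Path('a.jpg'), 'b.jpg'])))  # mixed list -> 2 results
```

The real AutoShape layer additionally handles letterbox resizing, batching, and NMS, which is why Hub inference needs no separate preprocessing script the way detect.py-style pipelines do.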
Hi @glenn-jocher, time taken to do inference:
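For timing questions like the one above, the usual approach is to wrap the call in `time.perf_counter`. A minimal sketch, with a stub standing in for the real `model(im)` call (the stub and its sleep duration are hypothetical):

```python
import time

def model(im):
    # Stub standing in for a real YOLOv5 forward pass
    time.sleep(0.01)
    return []

t0 = time.perf_counter()
results = model('zidane.jpg')
elapsed = time.perf_counter() - t0
print(f'Inference took {elapsed:.3f} s')
```

Note that the first Hub inference call is typically slower than later ones (model download, warmup), so timing a few repeated calls gives a more representative number.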
@Sanath1998 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

- Minimal – use as little code as possible that still produces the same problem
- Complete – provide all parts someone else needs to reproduce your problem
- Reproducible – test the code you're about to provide to make sure it reproduces the problem

For Ultralytics to provide assistance your code should also be:

- Current – verify that your code is up-to-date with the current GitHub master
- Unmodified – your problem must be reproducible using official YOLOv5 code without changes

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem. Thank you! 😃
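A reproducible report usually also includes environment details. A stdlib-only sketch of collecting the basics (the `info` dict and its keys are hypothetical; the real YOLOv5 call is omitted):

```python
import platform
import sys

# Environment details worth pasting into a bug report
info = {
    'python': platform.python_version(),
    'os': platform.platform(),
    'executable': sys.executable,
}
for key, value in info.items():
    print(f'{key}: {value}')
```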
Search before asking
Question
Hi @glenn-jocher,
Actually I wanted to know the difference between running detect.py for inference on images from the trained custom model versus directly inferencing in one line (results = model(imgs)).
Is there any specific reason behind doing one-line inference like results = model(imgs) by just passing images to the model() API?
Can you please give me a glimpse of it?
Reference link for one-line inference: "#2703 (comment)"
Additional
No response