Reference for ultralytics/models/sam/model.py
Note
This file is available at https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/sam/model.py. If you spot a problem please help fix it by contributing a Pull Request 🛠️. Thank you 🙏!
ultralytics.models.sam.model.SAM
SAM(model: str = 'sam_b.pt')
Bases: Model
SAM (Segment Anything Model) interface class for real-time image segmentation tasks.
This class provides an interface to the Segment Anything Model (SAM) from Ultralytics, designed for promptable segmentation with versatility in image analysis. It supports various prompts such as bounding boxes, points, or labels, and features zero-shot performance capabilities.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `model` | `Module` | The loaded SAM model. |
| `is_sam2` | `bool` | Indicates whether the model is a SAM2 variant. |
| `task` | `str` | The task type, set to `"segment"` for SAM models. |
Methods:

| Name | Description |
| --- | --- |
| `predict` | Perform segmentation prediction on the given image or video source. |
| `info` | Log information about the SAM model. |
Examples:
>>> sam = SAM("sam_b.pt")
>>> results = sam.predict("image.jpg", points=[[500, 375]])
>>> for r in results:
>>> print(f"Detected {len(r.masks)} masks")
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | Path to the pre-trained SAM model file. The file should have a .pt or .pth extension. | `'sam_b.pt'` |
Raises:

| Type | Description |
| --- | --- |
| `NotImplementedError` | If the model file extension is not .pt or .pth. |
Examples:
>>> sam = SAM("sam_b.pt")
>>> print(sam.is_sam2)
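The prompt types described above can be passed directly to the loaded model; a minimal sketch, assuming a local `image.jpg` and illustrative pixel coordinates:

```python
from ultralytics import SAM

# Load the pretrained SAM base checkpoint (.pt, as required above).
sam = SAM("sam_b.pt")

# Prompt with a bounding box (x1, y1, x2, y2); coordinates are illustrative.
results = sam.predict("image.jpg", bboxes=[[100, 100, 400, 400]])

for r in results:
    print(f"Detected {len(r.masks)} masks")
```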
Source code in ultralytics/models/sam/model.py
task_map
property
task_map: Dict[str, Dict[str, Type[Predictor]]]
Provide a mapping from the 'segment' task to its corresponding 'Predictor'.
Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Dict[str, Type[Predictor]]]` | A dictionary mapping the 'segment' task to its corresponding Predictor class. For SAM2 models, it maps to SAM2Predictor; otherwise, to the standard Predictor. |
Examples:
>>> sam = SAM("sam_b.pt")
>>> task_map = sam.task_map
>>> print(task_map)
{'segment': {'predictor': <class 'ultralytics.models.sam.predict.Predictor'>}}
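As an illustrative (not source-verified) lookup, the mapping can be used to check which predictor class a loaded checkpoint dispatches to:

```python
from ultralytics import SAM

sam = SAM("sam_b.pt")

# 'segment' is the only task; its entry holds Predictor for SAM weights
# and SAM2Predictor for SAM2 weights.
predictor_cls = sam.task_map["segment"]["predictor"]
print(predictor_cls.__name__)
```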
__call__
__call__(
source=None,
stream: bool = False,
bboxes=None,
points=None,
labels=None,
**kwargs
)
Perform segmentation prediction on the given image or video source.
This method is an alias for the 'predict' method, providing a convenient way to call the SAM model for segmentation tasks.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source` | `str \| Image \| ndarray \| None` | Path to the image or video file, or a PIL.Image object, or a numpy.ndarray object. | `None` |
| `stream` | `bool` | If True, enables real-time streaming. | `False` |
| `bboxes` | `List[List[float]] \| None` | List of bounding box coordinates for prompted segmentation. | `None` |
| `points` | `List[List[float]] \| None` | List of points for prompted segmentation. | `None` |
| `labels` | `List[int] \| None` | List of labels for prompted segmentation. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments to be passed to the predict method. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `list` | The model predictions, typically containing segmentation masks and other relevant information. |
Examples:
>>> sam = SAM("sam_b.pt")
>>> results = sam("image.jpg", points=[[500, 375]])
>>> print(f"Detected {len(results[0].masks)} masks")
Source code in ultralytics/models/sam/model.py
info
info(detailed: bool = False, verbose: bool = True)
Log information about the SAM model.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `detailed` | `bool` | If True, displays detailed information about the model layers and operations. | `False` |
| `verbose` | `bool` | If True, prints the information to the console. | `True` |
Returns:

| Type | Description |
| --- | --- |
| `tuple` | A tuple containing the model's information (string representations of the model). |
Examples:
>>> sam = SAM("sam_b.pt")
>>> info = sam.info()
>>> print(info[0]) # Print summary information
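A short sketch of the two flags documented above, assuming the default `sam_b.pt` checkpoint:

```python
from ultralytics import SAM

sam = SAM("sam_b.pt")

# Print a detailed layer-by-layer report to the console and keep the summary.
detailed_info = sam.info(detailed=True, verbose=True)

# Capture the summary without printing anything to the console.
quiet_info = sam.info(detailed=False, verbose=False)
```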
Source code in ultralytics/models/sam/model.py
predict
predict(
source,
stream: bool = False,
bboxes=None,
points=None,
labels=None,
**kwargs
)
Perform segmentation prediction on the given image or video source.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `source` | `str \| Image \| ndarray` | Path to the image or video file, or a PIL.Image object, or a numpy.ndarray object. | *required* |
| `stream` | `bool` | If True, enables real-time streaming. | `False` |
| `bboxes` | `List[List[float]] \| None` | List of bounding box coordinates for prompted segmentation. | `None` |
| `points` | `List[List[float]] \| None` | List of points for prompted segmentation. | `None` |
| `labels` | `List[int] \| None` | List of labels for prompted segmentation. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments for prediction. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `list` | The model predictions. |
Examples:
>>> sam = SAM("sam_b.pt")
>>> results = sam.predict("image.jpg", points=[[500, 375]])
>>> for r in results:
... print(f"Detected {len(r.masks)} masks")
Source code in ultralytics/models/sam/model.py