vllm.multimodal.inputs ¶
AudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
Alternatively, a tuple (audio, sampling_rate) whose sampling rate differs from the one expected by the model; the audio is resampled to the model's sampling rate before being processed by HF.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as audio embeddings; these are directly passed to the model without HF processing.
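For illustration, a minimal sketch of the three accepted forms (the shapes, sampling rates, and variable names below are assumptions, not requirements):
import numpy as np
import torch

# Raw waveform already at the model's expected sampling rate.
audio = np.zeros(16_000, dtype=np.float32)

# Waveform plus its sampling rate; vLLM resamples it before HF processing.
audio_with_rate = (np.zeros(44_100, dtype=np.float32), 44_100)

# Precomputed audio embeddings (3-D tensor); bypasses HF processing entirely.
audio_embeds = torch.zeros(1, 128, 4096)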
BatchedTensorInputs module-attribute ¶
BatchedTensorInputs: TypeAlias = dict[str, NestedTensors]
A dictionary containing nested tensors which have been batched via MultiModalKwargsItems.get_data.
HfAudioItem module-attribute ¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
HfImageItem module-attribute ¶
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
HfVideoItem module-attribute ¶
HfVideoItem: TypeAlias = Union[
list["Image"],
ndarray,
"torch.Tensor",
list[ndarray],
list["torch.Tensor"],
]
A transformers.image_utils.VideoInput representing a single video item, which can be passed to a HuggingFace VideoProcessor.
ImageItem module-attribute ¶
ImageItem: TypeAlias = Union[
HfImageItem,
"torch.Tensor",
"MediaWithBytes[HfImageItem]",
]
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as image embeddings; these are directly passed to the model without HF processing.
ModalityData module-attribute ¶
Either a single data item or a list of data items. It can only be None if a UUID is provided.
The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
MultiModalDataDict module-attribute ¶
MultiModalDataDict: TypeAlias = Mapping[
str, ModalityData[Any]
]
A dictionary containing an entry for each modality type to input.
The built-in modalities are defined by MultiModalDataBuiltins.
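As a hedged sketch (the item types and sizes are illustrative; actual support depends on the model), a dictionary mixing a single image with a list of audio clips might look like:
import numpy as np
from PIL import Image

mm_data = {
    "image": Image.new("RGB", (336, 336)),                  # single item
    "audio": [                                               # list of items
        (np.zeros(16_000, dtype=np.float32), 16_000),
        (np.zeros(8_000, dtype=np.float32), 8_000),
    ],
}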
MultiModalKwargsOptionalItems module-attribute ¶
MultiModalKwargsOptionalItems: TypeAlias = (
MultiModalKwargsItems[MultiModalKwargsItem]
| MultiModalKwargsItems[MultiModalKwargsItem | None]
)
MultiModalPlaceholderDict module-attribute ¶
MultiModalPlaceholderDict: TypeAlias = Mapping[
str, Sequence[PlaceholderRange]
]
A dictionary containing placeholder ranges for each modality.
MultiModalUUIDDict module-attribute ¶
A dictionary containing user-provided UUIDs for items in each modality. If a UUID for an item is not provided, its entry will be None and MultiModalHasher will compute a hash for the item.
The UUID will be used to identify the item for all caching purposes (input processing caching, embedding caching, prefix caching, etc.).
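A minimal sketch, assuming the per-modality value is a list with one entry per item (the UUID strings are made up for illustration):
mm_uuids = {
    "image": ["img-logo-v1", None],   # second image falls back to a computed hash
    "audio": ["clip-0001"],
}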
NestedTensors module-attribute ¶
NestedTensors: TypeAlias = Union[
list["NestedTensors"],
list["torch.Tensor"],
"torch.Tensor",
tuple["torch.Tensor", ...],
]
Uses a list instead of a tensor if the dimensions of each element do not match.
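For example (an illustrative sketch, not from the source):
import torch

# Same shape per item: the data can be a single stacked tensor.
stacked = torch.stack([torch.zeros(3, 4), torch.zeros(3, 4)])

# Mismatched shapes: fall back to a list of tensors.
ragged = [torch.zeros(3, 4), torch.zeros(5, 4)]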
VideoItem module-attribute ¶
VideoItem: TypeAlias = Union[
HfVideoItem,
"torch.Tensor",
tuple[HfVideoItem, dict[str, Any]],
]
A transformers.video_utils.VideoInput representing a single video item. This can be passed to a HuggingFace VideoProcessor with transformers.video_utils.VideoMetadata.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as video embeddings; these are directly passed to the model without HF processing.
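A hedged sketch of the (video, metadata) form; the frame layout and the metadata keys shown here are assumptions, not an authoritative schema:
import numpy as np

frames = np.zeros((16, 224, 224, 3), dtype=np.uint8)   # 16 RGB frames
video_item = (frames, {"fps": 2.0, "total_num_frames": 16})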
_I module-attribute ¶
_I = TypeVar(
"_I",
MultiModalKwargsItem,
MultiModalKwargsItem | None,
default=MultiModalKwargsItem,
)
BaseMultiModalField dataclass ¶
Bases: ABC
Defines how to interpret tensor data belonging to a keyword argument in MultiModalKwargs for multiple multi-modal items, and vice versa.
Source code in vllm/multimodal/inputs.py
keep_on_cpu class-attribute instance-attribute ¶
keep_on_cpu: bool = False
If True, then this field is excluded from being moved to the accelerator when MultiModalKwargsItems.get_data() is called to batch the data.
_field_factory ¶
Source code in vllm/multimodal/inputs.py
_reduce_data abstractmethod ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems abstractmethod ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Construct MultiModalFieldElem instances to represent the provided data.
This is the inverse of reduce_data.
Source code in vllm/multimodal/inputs.py
reduce_data ¶
reduce_data(
elems: list[MultiModalFieldElem],
*,
device: Device = None,
pin_memory: bool = False,
) -> NestedTensors
Merge the data from multiple instances of MultiModalFieldElem.
This is the inverse of build_elems.
Source code in vllm/multimodal/inputs.py
MultiModalBatchedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
MultiModalDataBuiltins ¶
Bases: TypedDict
Type annotations for modality types predefined by vLLM.
Source code in vllm/multimodal/inputs.py
MultiModalEncDecInputs ¶
Bases: MultiModalInputs
Represents the outputs of EncDecMultiModalProcessor ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
MultiModalFeatureSpec dataclass ¶
Represents a single multimodal input with its processed data and metadata.
Used by the V1 engine to track multimodal data through processing and caching. A request containing multiple multimodal items will have one MultiModalFeatureSpec per item.
Source code in vllm/multimodal/inputs.py
mm_position instance-attribute ¶
mm_position: PlaceholderRange
e.g., PlaceholderRange(offset=2, length=336)
__init__ ¶
__init__(
data: Optional[MultiModalKwargsItem],
modality: str,
identifier: str,
mm_position: PlaceholderRange,
) -> None
gather_kwargs staticmethod ¶
gather_kwargs(
features: list[MultiModalFeatureSpec], keys: set[str]
)
Source code in vllm/multimodal/inputs.py
MultiModalFieldConfig dataclass ¶
Source code in vllm/multimodal/inputs.py
batched staticmethod ¶
Defines a field where an element in the batch is obtained by indexing into the first dimension of the underlying data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
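Input:
Data: [[AAAA], [BBBB], [CCCC]]
Output:
Element 1: [AAAA]
Element 2: [BBBB]
Element 3: [CCCC]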
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
key: str, batch: NestedTensors
) -> Sequence[MultiModalFieldElem]
flat staticmethod ¶
flat(
modality: str,
slices: Sequence[slice] | Sequence[Sequence[slice]],
dim: int = 0,
*,
keep_on_cpu: bool = False,
)
Defines a field where an element in the batch is obtained by slicing the underlying data along dim (the first dimension by default).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| slices | Sequence[slice] | Sequence[Sequence[slice]] | For each multi-modal item, a slice (dim=0) or a tuple of slices (dim>0) that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to extract data; defaults to 0. | 0 |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
slices: [slice(0, 3), slice(3, 7), slice(7, 9)]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
slices: [
(slice(None), slice(0, 3)),
(slice(None), slice(3, 7)),
(slice(None), slice(7, 9))]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
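A minimal usage sketch matching the first example above (the "image" modality and the slice boundaries are assumptions chosen for illustration):
from vllm.multimodal.inputs import MultiModalFieldConfig

# Three items packed along dim 0 of one flattened tensor,
# occupying rows 0-2, 3-6, and 7-8 respectively.
field_config = MultiModalFieldConfig.flat(
    "image",
    slices=[slice(0, 3), slice(3, 7), slice(7, 9)],
)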
flat_from_sizes staticmethod ¶
flat_from_sizes(
modality: str,
size_per_item: Tensor,
dim: int = 0,
*,
keep_on_cpu: bool = False,
)
Defines a field where an element in the batch is obtained by slicing the underlying data along dim (the first dimension by default).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| size_per_item | Tensor | For each multi-modal item, the size of the slice that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to slice; defaults to 0. | 0 |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
size_per_item: [3, 4, 2]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
size_per_item: [3, 4, 2]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
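The same packing expressed via per-item sizes rather than explicit slices (again a sketch, not from the source):
import torch
from vllm.multimodal.inputs import MultiModalFieldConfig

field_config = MultiModalFieldConfig.flat_from_sizes(
    "image",
    size_per_item=torch.tensor([3, 4, 2]),
)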
shared staticmethod ¶
Defines a field where an element in the batch is obtained by taking the entirety of the underlying data.
This means that the data is the same for each element in the batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| batch_size | int | The number of multi-modal items which share this data. | required |
| keep_on_cpu | bool | Whether to keep this field on the CPU for the model inputs. | False |
Example:
Given:
batch_size: 4
Input:
Data: [XYZ]
Output:
Element 1: [XYZ]
Element 2: [XYZ]
Element 3: [XYZ]
Element 4: [XYZ]
Source code in vllm/multimodal/inputs.py
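A brief usage sketch, mirroring the example above where one tensor is shared by four items (the "image" modality is an assumption for illustration):
from vllm.multimodal.inputs import MultiModalFieldConfig

# The same underlying data is repeated for each of the four items.
field_config = MultiModalFieldConfig.shared("image", batch_size=4)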
MultiModalFieldElem dataclass ¶
Represents a keyword argument corresponding to a multi-modal item in MultiModalKwargs.
Source code in vllm/multimodal/inputs.py
data instance-attribute ¶
data: NestedTensors
The tensor data of this field in MultiModalKwargs, i.e. the value of the keyword argument to be passed to the model.
It may be set to None if it is determined that the item is cached in EngineCore.
field instance-attribute ¶
field: BaseMultiModalField
Defines how to combine the tensor data of this field with others in order to batch multi-modal items together for model inference.
key instance-attribute ¶
key: str
The key of this field in MultiModalKwargs, i.e. the name of the keyword argument to be passed to the model.
modality instance-attribute ¶
modality: str
The modality of the corresponding multi-modal item. Each multi-modal item can consist of multiple keyword arguments.
__eq__ ¶
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
modality: str,
key: str,
data: NestedTensors,
field: BaseMultiModalField,
) -> None
MultiModalFlatField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
*,
keep_on_cpu: bool = False,
slices: Sequence[slice] | Sequence[Sequence[slice]],
dim: int = 0,
) -> None
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Source code in vllm/multimodal/inputs.py
MultiModalInputs ¶
Bases: TypedDict
Represents the outputs of BaseMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
cache_salt instance-attribute ¶
cache_salt: NotRequired[str]
Optional cache salt to be used for prefix caching.
mm_kwargs instance-attribute ¶
mm_kwargs: MultiModalKwargsOptionalItems
Keyword arguments to be directly passed to the model after batching.
mm_placeholders instance-attribute ¶
mm_placeholders: MultiModalPlaceholderDict
For each modality, information about the placeholder tokens in prompt_token_ids.
prompt_token_ids instance-attribute ¶
The processed token IDs which include placeholder tokens.
MultiModalKwargs ¶
Bases: UserDict[str, NestedTensors]
A dictionary that represents the keyword arguments to torch.nn.Module.forward.
Source code in vllm/multimodal/inputs.py
__eq__ ¶
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
from_items staticmethod ¶
from_items(
items: Sequence[MultiModalKwargsItem],
*,
pin_memory: bool = False,
)
Source code in vllm/multimodal/inputs.py
MultiModalKwargsItem ¶
Bases: UserDict[str, MultiModalFieldElem]
A collection of MultiModalFieldElem corresponding to a data item in MultiModalDataItems.
Source code in vllm/multimodal/inputs.py
__init__ ¶
__init__(
data: Mapping[str, MultiModalFieldElem] = {},
) -> None
Source code in vllm/multimodal/inputs.py
dummy staticmethod ¶
Convenience method for constructing a dummy item for testing.
Source code in vllm/multimodal/inputs.py
from_elems staticmethod ¶
from_elems(elems: Sequence[MultiModalFieldElem])
MultiModalKwargsItems ¶
Bases: UserDict[str, Sequence[_I]]
A dictionary mapping each modality to a sequence of MultiModalKwargsItem.
Source code in vllm/multimodal/inputs.py
__getitem__ ¶
Source code in vllm/multimodal/inputs.py
from_hf_inputs staticmethod ¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
from_seq staticmethod ¶
from_seq(items: Sequence[MultiModalKwargsItem])
get_data ¶
get_data(
*, device: Device = None, pin_memory: bool = False
) -> BatchedTensorInputs
Construct a dictionary of keyword arguments to pass to the model.
Source code in vllm/multimodal/inputs.py
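A hedged usage sketch, assuming items is a MultiModalKwargsItems instance built elsewhere (for example via from_hf_inputs):
# Batches every per-item field into model keyword arguments,
# optionally pinning host memory for faster host-to-device copies.
batched_inputs = items.get_data(pin_memory=True)   # -> dict[str, NestedTensors]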
require_data ¶
require_data() -> MultiModalKwargsItems[
MultiModalKwargsItem
]
Source code in vllm/multimodal/inputs.py
MultiModalSharedField dataclass ¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data ¶
_reduce_data(
batch: list[NestedTensors], *, pin_memory: bool
) -> NestedTensors
build_elems ¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
PlaceholderRange dataclass ¶
Placeholder location information for multi-modal data.
Example:
Prompt: AAAA BBBB What is in these images?
Images A and B will have:
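A: PlaceholderRange(offset=0, length=4)
B: PlaceholderRange(offset=5, length=4)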
Source code in vllm/multimodal/inputs.py
is_embed class-attribute instance-attribute ¶
A boolean mask of shape (length,) indicating which positions between offset and offset + length to assign embeddings to.
__eq__ ¶
Source code in vllm/multimodal/inputs.py
_nested_tensors_h2d ¶
_nested_tensors_h2d(
tensors: NestedTensors, device: Device
) -> NestedTensors
Source code in vllm/multimodal/inputs.py
batched_tensors_equal ¶
batched_tensors_equal(
a: BatchedTensorInputs, b: BatchedTensorInputs
) -> bool
Equality check between BatchedTensorInputs objects.
Source code in vllm/multimodal/inputs.py
nested_tensors_equal ¶
nested_tensors_equal(
a: NestedTensors, b: NestedTensors
) -> bool
Equality check between NestedTensors objects.