sdeval.fidelity.ccip

Overview:

CCIP-based metrics for anime character training.

See imgutils.metrics.ccip for more information.

CCIPMetrics

class sdeval.fidelity.ccip.CCIPMetrics(images: Union[PIL.Image.Image, str, List[Union[PIL.Image.Image, str]]], feats: Optional[numpy.ndarray] = None, model: str = 'ccip-caformer-24-randaug-pruned', threshold: Optional[float] = None, silent: bool = False, tqdm_desc: str = None)[source]

Class for calculating similarity scores between images using the CCIP (Content-Consistent Image Pairwise) metric.

The CCIPMetrics class allows you to calculate the similarity score between a set of images and a reference dataset using the CCIP metric.

Parameters:
  • images (ImagesTyping) – The reference dataset of images for initializing CCIP metrics.

  • feats (Optional[np.ndarray]) – Precomputed feature data of the given character, with shape (B, 768). When provided, the images argument is ignored.

  • model (str) – The CCIP model to use for feature extraction. Default is ‘ccip-caformer-24-randaug-pruned’.

  • threshold (Optional[float]) – The threshold for the CCIP metric. If not provided, the default threshold for the chosen model is used.

  • silent (bool) – If True, suppresses progress bars and additional output during initialization and calculation.

  • tqdm_desc (str) – Description for the tqdm progress bar during initialization and calculation.

__init__(images: Union[PIL.Image.Image, str, List[Union[PIL.Image.Image, str]]], feats: Optional[numpy.ndarray] = None, model: str = 'ccip-caformer-24-randaug-pruned', threshold: Optional[float] = None, silent: bool = False, tqdm_desc: str = None)[source]

Initialize self. See help(type(self)) for accurate signature.
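
A minimal construction sketch, assuming a small set of reference images of the target character (the file paths below are hypothetical placeholders):

    from sdeval.fidelity.ccip import CCIPMetrics

    # Hypothetical reference images of the character being evaluated
    reference_images = [
        'reference/char_001.png',
        'reference/char_002.png',
        'reference/char_003.png',
    ]

    # Build the metric from the reference dataset; the default model
    # 'ccip-caformer-24-randaug-pruned' and its default threshold are used
    # when not specified explicitly.
    metrics = CCIPMetrics(images=reference_images)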

score(images: Union[PIL.Image.Image, str, List[Union[PIL.Image.Image, str]]], silent: bool = None, mode: Literal['mean', 'seq'] = 'mean') → Union[float, numpy.ndarray][source]

Calculate the similarity score between the reference dataset and a set of input images.

This method calculates the similarity score between the reference dataset (used for initialization) and a set of input images using the CCIP metric.

Parameters:
  • images (ImagesTyping) – The set of input images for calculating CCIP metrics.

  • silent (bool) – If True, suppresses progress bars and additional output during calculation.

  • mode (Literal['mean', 'seq']) – Mode of the return value. A single float is returned when 'mean' is assigned; a numpy array of per-image scores is returned when 'seq' is assigned. Default is 'mean'.

Returns:

The similarity score between the reference dataset and the input images.

Return type:

Union[float, np.ndarray]
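
A minimal usage sketch of score(), continuing from the constructor example above; the generated image paths are hypothetical placeholders:

    # Hypothetical images produced by the trained model
    generated = ['outputs/sample_01.png', 'outputs/sample_02.png']

    # Aggregate similarity as a single float (default mode='mean')
    mean_score = metrics.score(generated)
    print(f'Mean CCIP similarity: {mean_score:.4f}')

    # Per-image similarity scores as a numpy array (mode='seq')
    per_image_scores = metrics.score(generated, mode='seq')
    print(per_image_scores)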