
google-native.aiplatform/v1beta1.getDeploymentResourcePool


Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

Get a DeploymentResourcePool.

Using getDeploymentResourcePool

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getDeploymentResourcePool(args: GetDeploymentResourcePoolArgs, opts?: InvokeOptions): Promise<GetDeploymentResourcePoolResult>
function getDeploymentResourcePoolOutput(args: GetDeploymentResourcePoolOutputArgs, opts?: InvokeOptions): Output<GetDeploymentResourcePoolResult>
def get_deployment_resource_pool(deployment_resource_pool_id: Optional[str] = None,
                                 location: Optional[str] = None,
                                 project: Optional[str] = None,
                                 opts: Optional[InvokeOptions] = None) -> GetDeploymentResourcePoolResult
def get_deployment_resource_pool_output(deployment_resource_pool_id: Optional[pulumi.Input[str]] = None,
                                        location: Optional[pulumi.Input[str]] = None,
                                        project: Optional[pulumi.Input[str]] = None,
                                        opts: Optional[InvokeOptions] = None) -> Output[GetDeploymentResourcePoolResult]
func LookupDeploymentResourcePool(ctx *Context, args *LookupDeploymentResourcePoolArgs, opts ...InvokeOption) (*LookupDeploymentResourcePoolResult, error)
func LookupDeploymentResourcePoolOutput(ctx *Context, args *LookupDeploymentResourcePoolOutputArgs, opts ...InvokeOption) LookupDeploymentResourcePoolResultOutput

> Note: This function is named LookupDeploymentResourcePool in the Go SDK.

public static class GetDeploymentResourcePool 
{
    public static Task<GetDeploymentResourcePoolResult> InvokeAsync(GetDeploymentResourcePoolArgs args, InvokeOptions? opts = null)
    public static Output<GetDeploymentResourcePoolResult> Invoke(GetDeploymentResourcePoolInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetDeploymentResourcePoolResult> getDeploymentResourcePool(GetDeploymentResourcePoolArgs args, InvokeOptions options)
public static Output<GetDeploymentResourcePoolResult> getDeploymentResourcePool(GetDeploymentResourcePoolArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:aiplatform/v1beta1:getDeploymentResourcePool
  arguments:
    # arguments dictionary
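For example, a minimal TypeScript program using the direct form might look like the following sketch. The pool ID, location, and project values are placeholders; substitute your own.

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
// "my-pool", "us-central1", and "my-project" are placeholder values.
const pool = google_native.aiplatform.v1beta1.getDeploymentResourcePool({
    deploymentResourcePoolId: "my-pool",
    location: "us-central1",
    project: "my-project",
});

// Stack outputs accept Promises, so the result can be exported directly.
export const poolName = pool.then(p => p.name);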

The following arguments are supported:

C#
DeploymentResourcePoolId This property is required. string
Location This property is required. string
Project string

Go
DeploymentResourcePoolId This property is required. string
Location This property is required. string
Project string

Java
deploymentResourcePoolId This property is required. String
location This property is required. String
project String

Node.js
deploymentResourcePoolId This property is required. string
location This property is required. string
project string

Python
deployment_resource_pool_id This property is required. str
location This property is required. str
project str

YAML
deploymentResourcePoolId This property is required. String
location This property is required. String
project String

getDeploymentResourcePool Result

The following output properties are available:

C#
CreateTime string
Timestamp when this DeploymentResourcePool was created.
DedicatedResources Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
The underlying DedicatedResources that the DeploymentResourcePool uses.
Name string
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

Go
CreateTime string
Timestamp when this DeploymentResourcePool was created.
DedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
The underlying DedicatedResources that the DeploymentResourcePool uses.
Name string
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

Java
createTime String
Timestamp when this DeploymentResourcePool was created.
dedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
The underlying DedicatedResources that the DeploymentResourcePool uses.
name String
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

Node.js
createTime string
Timestamp when this DeploymentResourcePool was created.
dedicatedResources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
The underlying DedicatedResources that the DeploymentResourcePool uses.
name string
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

Python
create_time str
Timestamp when this DeploymentResourcePool was created.
dedicated_resources GoogleCloudAiplatformV1beta1DedicatedResourcesResponse
The underlying DedicatedResources that the DeploymentResourcePool uses.
name str
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}

YAML
createTime String
Timestamp when this DeploymentResourcePool was created.
dedicatedResources Property Map
The underlying DedicatedResources that the DeploymentResourcePool uses.
name String
Immutable. The resource name of the DeploymentResourcePool. Format: projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}
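The output form returns the same properties wrapped in an Output. A sketch in TypeScript (placeholder pool ID and location), using Pulumi's property lifting to reach nested fields:

import * as google_native from "@pulumi/google-native";

// Output form: Input-wrapped arguments, Output-wrapped result.
const pool = google_native.aiplatform.v1beta1.getDeploymentResourcePoolOutput({
    deploymentResourcePoolId: "my-pool",   // placeholder
    location: "us-central1",               // placeholder
});

// Nested response fields are reachable through Output property lifting.
export const poolName = pool.name;
export const poolCreateTime = pool.createTime;
export const poolMachineType = pool.dedicatedResources.machineSpec.machineType;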

Supporting Types

GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse

C#
MetricName This property is required. string
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
Target This property is required. int
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

Go
MetricName This property is required. string
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
Target This property is required. int
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

Java
metricName This property is required. String
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
target This property is required. Integer
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

Node.js
metricName This property is required. string
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
target This property is required. number
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

Python
metric_name This property is required. str
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
target This property is required. int
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

YAML
metricName This property is required. String
The resource metric name. Supported metrics for Online Prediction: aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle and aiplatform.googleapis.com/prediction/online/cpu/utilization.
target This property is required. Number
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
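As a sketch of how these fields come back from the function above (TypeScript, placeholder pool ID and location), each entry pairs a metric name with its target utilization:

import * as google_native from "@pulumi/google-native";

// List the autoscaling metric overrides of a pool; "my-pool" and
// "us-central1" are placeholders.
google_native.aiplatform.v1beta1.getDeploymentResourcePool({
    deploymentResourcePoolId: "my-pool",
    location: "us-central1",
}).then(pool => {
    for (const spec of pool.dedicatedResources.autoscalingMetricSpecs) {
        // e.g. "aiplatform.googleapis.com/prediction/online/cpu/utilization -> 80%"
        console.log(`${spec.metricName} -> ${spec.target}%`);
    }
});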

GoogleCloudAiplatformV1beta1DedicatedResourcesResponse

C#
AutoscalingMetricSpecs This property is required. List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse>
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
MachineSpec This property is required. Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
Immutable. The specification of a single machine used by the prediction.
MaxReplicaCount This property is required. int
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
MinReplicaCount This property is required. int
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

Go
AutoscalingMetricSpecs This property is required. []GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
MachineSpec This property is required. GoogleCloudAiplatformV1beta1MachineSpecResponse
Immutable. The specification of a single machine used by the prediction.
MaxReplicaCount This property is required. int
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
MinReplicaCount This property is required. int
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

Java
autoscalingMetricSpecs This property is required. List<GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse>
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
machineSpec This property is required. GoogleCloudAiplatformV1beta1MachineSpecResponse
Immutable. The specification of a single machine used by the prediction.
maxReplicaCount This property is required. Integer
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
minReplicaCount This property is required. Integer
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

Node.js
autoscalingMetricSpecs This property is required. GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse[]
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
machineSpec This property is required. GoogleCloudAiplatformV1beta1MachineSpecResponse
Immutable. The specification of a single machine used by the prediction.
maxReplicaCount This property is required. number
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
minReplicaCount This property is required. number
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

Python
autoscaling_metric_specs This property is required. Sequence[GoogleCloudAiplatformV1beta1AutoscalingMetricSpecResponse]
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
machine_spec This property is required. GoogleCloudAiplatformV1beta1MachineSpecResponse
Immutable. The specification of a single machine used by the prediction.
max_replica_count This property is required. int
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
min_replica_count This property is required. int
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

YAML
autoscalingMetricSpecs This property is required. List<Property Map>
Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
machineSpec This property is required. Property Map
Immutable. The specification of a single machine used by the prediction.
maxReplicaCount This property is required. Number
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
minReplicaCount This property is required. Number
Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
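The quota note on max_replica_count can be made concrete with a small helper. This is only an illustrative sketch: the per-machine-type core counts below are assumptions, not values returned by this API.

// Rough quota impact, following the max_replica_count description above.
// Core counts per machine type are assumed for illustration only.
const ASSUMED_CORES: Record<string, number> = {
    "n1-standard-2": 2,  // assumption
    "n1-standard-4": 4,  // assumption
};

interface DedicatedResourcesLike {
    maxReplicaCount: number;
    machineSpec: { machineType: string; acceleratorCount: number };
}

function quotaImpact(res: DedicatedResourcesLike) {
    const cores = ASSUMED_CORES[res.machineSpec.machineType] ?? 0;
    return {
        cpuQuota: res.maxReplicaCount * cores,                             // max_replica_count * cores
        gpuQuota: res.maxReplicaCount * res.machineSpec.acceleratorCount,  // max_replica_count * GPUs per replica
    };
}

// Example: 4 replicas of n1-standard-4 with 1 GPU each -> 16 CPUs and 4 GPUs of quota.
console.log(quotaImpact({ maxReplicaCount: 4, machineSpec: { machineType: "n1-standard-4", acceleratorCount: 1 } }));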

GoogleCloudAiplatformV1beta1MachineSpecResponse

C#
AcceleratorCount This property is required. int
The number of accelerators to attach to the machine.
AcceleratorType This property is required. string
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
MachineType This property is required. string
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
TpuTopology This property is required. string
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

Go
AcceleratorCount This property is required. int
The number of accelerators to attach to the machine.
AcceleratorType This property is required. string
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
MachineType This property is required. string
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
TpuTopology This property is required. string
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

Java
acceleratorCount This property is required. Integer
The number of accelerators to attach to the machine.
acceleratorType This property is required. String
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
machineType This property is required. String
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
tpuTopology This property is required. String
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

Node.js
acceleratorCount This property is required. number
The number of accelerators to attach to the machine.
acceleratorType This property is required. string
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
machineType This property is required. string
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
tpuTopology This property is required. string
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

Python
accelerator_count This property is required. int
The number of accelerators to attach to the machine.
accelerator_type This property is required. str
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
machine_type This property is required. str
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
tpu_topology This property is required. str
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

YAML
acceleratorCount This property is required. Number
The number of accelerators to attach to the machine.
acceleratorType This property is required. String
Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
machineType This property is required. String
Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
tpuTopology This property is required. String
Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
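To illustrate how these fields combine, the helper below formats a MachineSpec-shaped value into a one-line summary. The field names mirror the response above; the sample values are placeholders.

interface MachineSpecLike {
    machineType: string;
    acceleratorType: string;
    acceleratorCount: number;
    tpuTopology: string;
}

// Render a short, human-readable description of a machine spec.
function describeMachineSpec(spec: MachineSpecLike): string {
    const parts = [spec.machineType];
    if (spec.acceleratorCount > 0) {
        parts.push(`${spec.acceleratorCount} x ${spec.acceleratorType}`);
    }
    if (spec.tpuTopology) {
        parts.push(`TPU topology ${spec.tpuTopology}`);
    }
    return parts.join(", ");
}

// Placeholder values; prints "n1-standard-4, 2 x NVIDIA_TESLA_T4".
console.log(describeMachineSpec({
    machineType: "n1-standard-4",
    acceleratorType: "NVIDIA_TESLA_T4",
    acceleratorCount: 2,
    tpuTopology: "",
}));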

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
