| Modifier and Type | Method | Description |
| --- | --- | --- |
| `static CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.builder()` | |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.id(String id)` | A unique identifier for the new inference endpoint. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.instanceCount(Integer instanceCount)` | The minimum number of Amazon EC2 instances to deploy to an endpoint for prediction. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.instanceType(String instanceType)` | The type of Neptune ML instance to use for online serving. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.mlModelTrainingJobId(String mlModelTrainingJobId)` | The job ID of the completed model-training job that created the model the inference endpoint will point to. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.mlModelTransformJobId(String mlModelTransformJobId)` | The job ID of the completed model-transform job. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.modelName(String modelName)` | Model type for training. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.neptuneIamRoleArn(String neptuneIamRoleArn)` | The ARN of an IAM role that gives Neptune access to SageMaker and Amazon S3 resources. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.overrideConfiguration(Consumer<AwsRequestOverrideConfiguration.Builder> builderConsumer)` | |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.overrideConfiguration(AwsRequestOverrideConfiguration overrideConfiguration)` | |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.toBuilder()` | |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.update(Boolean update)` | If set to `true`, indicates that this is an update request. |
| `CreateMlEndpointRequest.Builder` | `CreateMlEndpointRequest.Builder.volumeEncryptionKMSKey(String volumeEncryptionKMSKey)` | The Amazon Key Management Service (Amazon KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. |
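The builder methods above chain in the usual AWS SDK for Java v2 fluent style. A minimal sketch, assuming the `neptunedata` module is on the classpath; the endpoint ID, job ID, role ARN, and instance settings are placeholder values, not defaults:

```java
// Sketch: building a CreateMlEndpointRequest with the fluent builder
// (AWS SDK for Java v2, neptunedata module). All literal values below
// are hypothetical placeholders for illustration.
import software.amazon.awssdk.services.neptunedata.model.CreateMlEndpointRequest;

public class CreateMlEndpointExample {
    public static void main(String[] args) {
        CreateMlEndpointRequest request = CreateMlEndpointRequest.builder()
                .id("my-inference-endpoint")               // unique endpoint identifier
                .mlModelTrainingJobId("training-job-1234") // completed model-training job
                .instanceType("ml.m5.xlarge")              // Neptune ML serving instance type
                .instanceCount(1)                          // minimum EC2 instances for prediction
                .neptuneIamRoleArn("arn:aws:iam::123456789012:role/NeptuneMLRole")
                .build();

        // toBuilder() copies the immutable request into a new builder, so
        // individual fields can be changed -- here, setting update(true)
        // to mark this as an update request for an existing endpoint.
        CreateMlEndpointRequest updateRequest = request.toBuilder()
                .update(true)
                .build();

        System.out.println(updateRequest.id());
    }
}
```

Requests built this way are immutable; `toBuilder()` is the intended way to derive a modified copy rather than mutating the original.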