Deploy Your Models on Amazon SageMaker with vetiver
I’m excited to announce that vetiver 0.2.1 for Python and R are both available (from PyPI and CRAN, respectively). The vetiver framework for MLOps tasks provides fluent tooling to version, deploy, and monitor a trained model in either Python or R. Functions handle both recording and checking the model’s input data prototype and predicting from a remote API endpoint.
You can install vetiver 0.2.1 for Python with:
python -m pip install vetiver

And for R with:
install.packages("vetiver")

To see all the changes in vetiver 0.2.1, check out the release notes for Python or for R. You may be interested to learn about several updates:
- The REST APIs generated by vetiver now include an endpoint at /metadata to more easily access your model's metadata. The vetiver framework creates some metadata automatically for your trained model, such as the packages used to train it and a description. You can also store any custom metadata you need for your particular MLOps use case, for example, the model metrics you observed while developing your model.
- You can now deploy models with vetiver that were trained with spaCy, keras, and the luz API for torch.
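As an illustration, you could query the new /metadata endpoint of a deployed vetiver API with httr; this is a minimal sketch, and the URL below is a placeholder you would replace with your own API's address:

```r
library(httr)

## hypothetical URL for a deployed vetiver API;
## substitute the address of your own endpoint
api_url <- "https://example.com/cars-linear"

## the /metadata endpoint returns both the automatically
## recorded metadata and any custom metadata you stored
metadata <- content(GET(paste0(api_url, "/metadata")))
str(metadata)
```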
As my colleague Gagan mentioned in his recent post, the vetiver R package now provides fluent support for deploying models to AWS SageMaker. We know this is a significant new integration, so let’s dig a bit deeper!
Deploy vetiver models to SageMaker
To deploy a model on SageMaker, start by creating a deployable model object with vetiver:
library(vetiver)
cars_lm <- lm(mpg ~ ., data = mtcars)
v <- vetiver_model(cars_lm, "cars-linear")

This is clearly a very simple model, but you can deploy any of the numerous model types that vetiver supports.
Next, you store your model object as a pin in an S3 bucket. You need to use an existing bucket here:
library(pins)
## existing bucket:
identifier <- "sagemaker-vetiver-demo"
board <- board_s3(bucket = identifier)
vetiver_pin_write(board, v)

Much like the function vetiver_deploy_rsconnect() for Posit Connect, there is a single function that deploys your model as a SageMaker model endpoint from here!
new_endpoint <-
vetiver_deploy_sagemaker(
board = board,
name = "cars-linear",
instance_type = "ml.t2.medium"
  )

This single function takes care of everything that needs to happen to deploy your model:
- First, it builds and pushes a custom Docker image for your model (including all the dependencies needed to make predictions) to ECR.
- Second, it creates a SageMaker model object.
- Last, it creates and deploys a SageMaker model endpoint.
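Once the deployment finishes, you can predict from the new endpoint directly, much as with other vetiver endpoints. A minimal sketch, assuming the deployment above succeeded:

```r
## the object returned by vetiver_deploy_sagemaker()
## can be used with predict() to call the live endpoint
predict(new_endpoint, mtcars[1:3, ])
```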
More modular functions for these three steps are also available for advanced use cases. Check out my recent blog post and screencast to learn how to use these functions and to understand more deeply how deploying a model to SageMaker works.
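If you need more control over any one of the three steps, you can run them individually. A sketch of that workflow, using what I understand to be the modular helpers in the vetiver R package (vetiver_sm_build(), vetiver_sm_model(), and vetiver_sm_endpoint(); please check the package reference for their exact arguments):

```r
library(vetiver)

## assumed function names and signatures; consult the vetiver
## reference documentation before relying on these

## 1. build and push a Docker image for the pinned model to ECR
new_image_uri <- vetiver_sm_build(
  board = board,
  name = "cars-linear"
)

## 2. create a SageMaker model object from that image
model_name <- vetiver_sm_model(new_image_uri)

## 3. create and deploy a SageMaker model endpoint
new_endpoint <- vetiver_sm_endpoint(
  model_name,
  instance_type = "ml.t2.medium"
)
```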
Going further
As I mentioned earlier, the vetiver/SageMaker integration currently supports models trained in R. We know that there is already good support for deploying Python models to SageMaker, but if you are interested in using vetiver for Python on SageMaker, chime in on this issue describing your use case and needs.
If you’re just getting started with SageMaker as an RStudio user, check out this post by my colleague James Blair.
If you want to learn more about using RStudio on SageMaker, including how to deploy models with vetiver, join my Posit colleagues Gagandeep Singh and Tom Mock on June 6 at 11am ET. Add the YouTube Premiere to your calendar here.
Acknowledgments
I would like to particularly acknowledge the contributions of Dyfan Jones to the new SageMaker support in the vetiver R package. In addition to Dyfan, we’d like to thank all the folks who have contributed to vetiver so far, whether via filing issues or contributing code or documentation since the 0.2.0 release.
For Python:
@dbkegley, @has2k1, @isabelizimm, @josho88, @juliasilge, @krumeto, @machow, and @MartinBaumga
For R:
@dfalbel, @DyfanJones, @eleanor-m, @JosiahParry, @juliasilge, @nipnipj, @pa-nathaniel, @rdavis120, and @turalsadigov