
MLOps Engineering at Scale

About MLOps Engineering at Scale

Deploying a machine learning model into a fully realized production system usually requires painstaking work by an operations team creating and managing custom servers. MLOps Engineering at Scale helps you bridge that gap by using the pre-built services provided by cloud platforms like Azure and AWS to assemble your ML system's infrastructure. Following a real-world use case for calculating taxi fares, you'll learn how to get a serverless ML pipeline up and running using AWS services. Clear and detailed tutorials show you how to develop reliable, flexible, and scalable machine learning systems without time-consuming management tasks or the costly overheads of physical hardware.

About the technology
Your new machine learning model is ready to put into production, and suddenly all your time is taken up by setting up your server infrastructure. Serverless machine learning offers a productivity-boosting alternative. It eliminates the time-consuming operations tasks from your machine learning lifecycle, letting out-of-the-box cloud services take over launching, running, and managing your ML systems. With the serverless capabilities of major cloud vendors handling your infrastructure, you're free to focus on tuning and improving your models.

About the book
MLOps Engineering at Scale is a guide to bringing your experimental machine learning code to production using serverless capabilities from major cloud providers. You'll start with best practices for your datasets, learning to bring VACUUM data-quality principles to your projects and to ensure that your datasets can be reproducibly sampled. Next, you'll implement machine learning models with PyTorch, discovering how to scale up your models in the cloud and how to use PyTorch Lightning for distributed ML training. Finally, you'll tune and engineer your serverless machine learning pipeline for scalability, elasticity, and ease of monitoring with the built-in notification tools of your cloud platform. When you're done, you'll have the tools to easily bridge the gap between ML models and a fully functioning production system.

What's inside
  • Extracting, transforming, and loading datasets
  • Querying datasets with SQL
  • Understanding automatic differentiation in PyTorch (see the short sketch after this description)
  • Deploying trained models and pipelines as a service endpoint
  • Monitoring and managing your pipeline's life cycle
  • Measuring performance improvements

About the reader
For data professionals with intermediate Python skills and basic familiarity with machine learning. No cloud experience required.

About the author
Carl Osipov has spent over 15 years working on big data processing and machine learning in multi-core, distributed systems, such as service-oriented architecture and cloud computing platforms. While at IBM, Carl helped IBM Software Group shape its strategy around the use of Docker and other container-based technologies for serverless computing using IBM Cloud and Amazon Web Services. At Google, Carl learned from the world's foremost experts in machine learning and also helped manage the company's efforts to democratize artificial intelligence. You can learn more about Carl from his blog, Clouds With Carl.
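The "What's inside" list above mentions automatic differentiation in PyTorch. As a rough, hedged illustration of that topic (this is not code from the book; the toy linear taxi-fare model, variable names, and numbers are invented), a few lines of PyTorch show autograd computing gradients:

import torch

# Toy data: taxi trip distances (km) and observed fares
distance = torch.tensor([1.5, 3.2, 7.8])
fare = torch.tensor([6.0, 11.0, 24.5])

# Learnable parameters: fare per km and base fare
w = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

pred = w * distance + b                  # forward pass of a linear model
loss = torch.mean((pred - fare) ** 2)    # mean squared error

loss.backward()                          # autograd computes d(loss)/dw and d(loss)/db
print(w.grad, b.grad)                    # gradients an optimizer step would use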

  • Language: English
  • ISBN: 9781617297762
  • Format: Paperback
  • Pages: 250
  • Published: March 16, 2022
  • Dimensions: 234x187x24 mm
  • Weight: 628 g





