Deploying PyTorch models to production with Cortex
install • documentation • examples • we're hiring • chat with us
Model serving at scale
Deploy
- Deploy TensorFlow, PyTorch, ONNX, scikit-learn, and other models.
- Define preprocessing and postprocessing steps in Python.
- Configure APIs as realtime or batch.
- Deploy multiple models per API.
Manage
- Monitor API performance and track predictions.
- Update APIs with no downtime.
- Stream logs from APIs.
- Perform A/B tests.
Scale
- Test locally, scale on your AWS account.
- Autoscale to handle production traffic.
- Reduce cost with spot instances.
How it works
Write APIs in Python
Define any real-time or batch inference pipeline as simple Python APIs, regardless of framework.
```python
# predictor.py
from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        return self.model(payload["text"])[0]
```
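The predictor interface only requires an `__init__` and a `predict` method. As a minimal sketch of that interface with a stand-in "model" (so it runs without downloading transformers weights; the `suffix` config key is illustrative, not part of Cortex):

```python
# A stand-in predictor that demonstrates the interface shape.
class PythonPredictor:
    def __init__(self, config):
        # config holds user-defined values passed to the predictor
        self.suffix = config.get("suffix", "!")

    def predict(self, payload):
        # payload is the parsed JSON body of the request
        return payload["text"] + self.suffix

predictor = PythonPredictor({"suffix": "!!!"})
print(predictor.predict({"text": "hello"}))  # hello!!!
```

`__init__` runs once per replica at startup (load weights here), while `predict` runs per request.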
Configure infrastructure in YAML
Configure autoscaling, monitoring, compute resources, update strategies, and more.
```yaml
# cortex.yaml
- name: text-generator
  predictor:
    path: predictor.py
  networking:
    api_gateway: public
  compute:
    gpu: 1
  autoscaling:
    min_replicas: 3
```
Scale to handle production traffic
Handle traffic with request-based autoscaling. Minimize spend with spot instances and multi-model APIs.
```bash
$ cortex get text-generator

endpoint: https://example.com/text-generator

status   last-update   replicas   requests   latency
live     10h           10         100000     100ms
```
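Request-based autoscaling generally works by comparing in-flight requests against a per-replica concurrency target and clamping the result to configured bounds. A simplified sketch of that calculation (the function and parameter names are illustrative, not Cortex's internals):

```python
import math

def desired_replicas(in_flight_requests, target_concurrency,
                     min_replicas, max_replicas):
    # Scale so each replica handles roughly target_concurrency
    # concurrent requests, clamped to the configured bounds.
    raw = math.ceil(in_flight_requests / target_concurrency)
    return max(min_replicas, min(max_replicas, raw))

print(desired_replicas(250, 10, 3, 50))  # 25
```

Because the signal is in-flight requests rather than CPU, replica counts track traffic directly, which suits inference workloads whose bottleneck is model latency.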
Integrate with your stack
Integrate Cortex with any data science platform and CI/CD tooling, without changing your workflow.
```python
# predictor.py
import tensorflow
import torch
import transformers
import mlflow
...
```
Run on your AWS account
Run Cortex on your AWS account (GCP support is coming soon), maintaining control over resource utilization and data access.
```yaml
# cluster.yaml
region: us-west-2
instance_type: g4dn.xlarge
spot: true
min_instances: 1
max_instances: 5
```
Focus on machine learning, not DevOps
You don't need to bring your own cluster or containerize your models; Cortex automates your cloud infrastructure.
```bash
$ cortex cluster up

configuring networking ...
configuring logging ...
configuring metrics ...
configuring autoscaling ...

cortex is ready!
```
Get started
```bash
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"
```
See our installation guide, then deploy one of our examples or bring your own models to build realtime APIs and batch APIs.