Recruiter Brief
Recruiter-friendly screening view
Entry-level MLOps candidate with production-minded project evidence
The 3-minute version: a career-switcher with 14 years of operations experience, now building the infrastructure that makes ML models reliable. That is the whole story.
I am looking for my first formal role in ML/MLOps. My strongest fit is an entry-level / junior role where I can contribute to model serving, ML workflow reliability, documentation, monitoring and deployment support while learning from experienced engineers.
My background is unusual for an entry-level candidate: before moving into data science and MLOps, I spent 14 years running business operations. That experience shows up in the way I think about cost, reliability, ownership, customer impact and clear handoffs.
Quick Screening Snapshot
Target level
Entry-level / junior MLOps & Production ML
I am looking for a role with room to learn, contribute and grow into stronger production ML ownership.
Location
Mexico City / Remote
Open to remote-first opportunities, especially with US, Mexico or LATAM teams where written technical communication matters.
Languages
Spanish native, English B2
Comfortable with technical documentation and async collaboration in English; interview conversations work well with preparation.
Education
TripleTen Data Science, 2026
Formal training that supports the portfolio projects and the MLOps transition. Hands-on AWS work (EKS, ECR, IRSA, Terraform) is exercised across the portfolio infrastructure code.
Seniority alignment: I am fully aligned with entry-level / junior role scope, compensation bands, code review expectations and growth plans. My operations maturity is a contribution to the team, not a seniority claim.
Key Proof Points
Reusable system
Production template
A starter framework covering FastAPI, Docker, Kubernetes, MLflow, CI/CD and deployment guardrails.
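To make the deployment-guardrail idea concrete, here is a minimal sketch of the kind of check such a template can bundle: a framework-agnostic payload validator that a FastAPI endpoint could run before calling the model. The `FEATURE_SPEC` schema and `validate_payload` name are invented for illustration, not the template's actual code.

```python
# Hypothetical sketch of a pre-inference guardrail a serving template
# might bundle. The feature schema and names are illustrative only.

FEATURE_SPEC = {"age": float, "income": float, "tenure_months": int}


def validate_payload(payload: dict) -> list[str]:
    """Return validation errors; an empty list means the payload is
    safe to hand to the model's predict path."""
    errors = []
    for name, expected in FEATURE_SPEC.items():
        if name not in payload:
            errors.append(f"missing feature: {name}")
            continue
        value = payload[name]
        # Accept ints where floats are expected; JSON clients often send them.
        if expected is float and isinstance(value, int):
            continue
        if not isinstance(value, expected):
            errors.append(f"{name}: expected {expected.__name__}, got {type(value).__name__}")
    for name in payload:
        if name not in FEATURE_SPEC:
            errors.append(f"unexpected feature: {name}")
    return errors
```

An endpoint would return HTTP 422 with the error list when it is non-empty, and the same function stays trivially unit-testable in CI.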
Incident diagnosis
81% errors to 0%
The clearest debugging story: measured failure, root cause, fix and verification.
Cost judgment
Cloud paused by design
The runtime was paused to control cost; the code, evidence and reactivation path remain documented.
Engineering proof
395+ tests
CI validates code, docs, infrastructure checks, smoke paths and project quality gates before deployment.
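As one illustration of what a CI smoke path can gate on (a sketch under assumed names; the actual suite's contracts live in the portfolio), here is a check that a serving endpoint's health response carries the fields a deploy gate needs:

```python
# Hypothetical CI smoke check: verify a health response exposes the
# fields a deploy gate needs. The response shape is an assumption for
# illustration, not the portfolio's actual API contract.

REQUIRED_FIELDS = {"status", "model_version"}


def health_contract_ok(response: dict) -> bool:
    """A deploy gate passes only when the service reports 'ok' and
    identifies which model version is loaded."""
    return REQUIRED_FIELDS <= response.keys() and response["status"] == "ok"
```

In a real pipeline the dict would come from an HTTP call to the freshly deployed service; a failing check blocks the rollout.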
Best-Fit Roles
Primary fit
Entry-level MLOps / Production ML
Model serving, Docker/Kubernetes artifacts, CI/CD support, monitoring, MLflow hygiene, deployment notes and reliability improvements.
Also strong
Entry-level ML Engineer / AI Engineer I
Applied ML roles where model work needs APIs, testing, documentation and clear handoff into an engineering workflow.
Adjacent path
ML Platform or Data Engineering
Teams working on ML pipelines, feature workflows, batch jobs, validation, cloud runtime support or production data paths.
Why The Background Matters
Many entry-level ML candidates can train models in notebooks. My portfolio is built around the next layer: what happens when a model needs an API, tests, deployment artifacts, monitoring, cost decisions and documentation another person can review.
The 14 years in operations are not a substitute for engineering experience. They are a multiplier for how I approach engineering work: I care about evidence, clarity, process, trade-offs and systems that can survive real team usage.
Positioning
Entry-level / junior in formal ML/MLOps employment, but mature in ownership, communication, cost awareness and operating discipline.
What Makes Me Different
Cost awareness
I think in trade-offs
My operations background makes budget, scope and maintenance part of the engineering discussion instead of an afterthought.
Ownership
I document decisions
The portfolio includes ADRs, runbooks and status pages so reviewers can see why decisions were made, not only what was built.
Debugging
I measure before guessing
The strongest technical story is an API failure that moved from 81% errors to 0% after isolating the serving-pattern root cause.
Reliability
I care about the operating layer
Tests, deployment paths, monitoring, model packaging and current-status communication are first-class parts of the work.
First 90 Days Contribution
Days 1-30
Learn and document the workflow
Run the stack locally, understand the model lifecycle, map deployment steps, document gaps and fix small onboarding or test issues.
Days 31-60
Contribute to delivery support
Help with FastAPI endpoints, validation checks, MLflow hygiene, CI/CD tasks, Docker/Kubernetes artifacts or monitoring improvements under review.
Days 61-90
Own a focused reliability improvement
Take one scoped improvement from issue to documentation: smoke tests, readiness checks, drift notes, runbooks, cost tracking or deployment evidence.
What To Look For In The Portfolio
Project judgment
Reusable MLOps template
The strongest project is the production template: a reusable starting point for ML services with serving, testing, deployment and workflow guardrails.
Debugging ability
Measured incident writeups
The portfolio includes load testing, inference-path debugging and documented trade-offs rather than only final model metrics.
Communication
Evidence a team can review
Architecture notes, model cards, runbooks, deployment evidence and current portfolio status are written so both technical and non-technical reviewers can understand the story.
Suggested Screening Questions
Debugging
Ask about the 81% API error rate
The important signal is the diagnosis process: how I moved from symptoms to root cause, fixed the serving path and verified the result.
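The "symptoms to root cause" step starts with measurement, which can be sketched minimally as follows. The log format here is invented for illustration; the real incident's data and tooling are in the portfolio writeup.

```python
# Hypothetical "measure first" step: compute the 5xx error rate per
# endpoint from structured access-log records. The log format is
# invented for illustration.
from collections import Counter


def error_rates(records: list[dict]) -> dict[str, float]:
    """Share of server-error (5xx) responses per endpoint path."""
    totals, errors = Counter(), Counter()
    for rec in records:
        totals[rec["path"]] += 1
        if rec["status"] >= 500:
            errors[rec["path"]] += 1
    return {path: errors[path] / totals[path] for path in totals}
```

Measuring the rate per path narrows the search to the failing serving route before any code is touched.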
Product thinking
Ask why I built the template
The template shows how I converted repeated portfolio lessons into reusable guardrails for future ML services.
Cost judgment
Ask about GCP vs AWS trade-offs
The cloud comparison is useful because it connects technical deployment evidence with operating cost and scope control.
Self-awareness
Ask what I would improve next
This opens the most honest conversation: where the portfolio is strong, where it is still controlled evidence, and how I would evolve it on a team.
What I Am Building Next
Live evidence
More real traffic windows
Run short, cost-controlled live demos to capture fresh Grafana, Prometheus and MLflow evidence without leaving infrastructure online permanently.
Collaboration
More public review signals
Add external feedback, PR review examples or open-source contributions so the portfolio shows how I work with other engineers.
Depth
One deeper infrastructure writeup
Expand one operational topic, such as monitoring or deployment strategy, into a concise trade-off article.
Domain fit
A project closer to operations
Explore a future project around inventory, staffing, cost anomalies or operations forecasting, where my previous background is a direct advantage.
Current Boundaries And Next Proof
Live traffic
Controlled load tests, not 24/7 users
The strongest runtime evidence comes from controlled load tests and live development windows, not persistent production user traffic.
Cloud ML platforms
GCP and AWS Kubernetes first
My cloud work is centered on GKE, EKS, Terraform and kubectl. SageMaker and Azure ML are not yet core strengths.
ML depth
Engineering and deployment side
I am not presenting myself as an ML researcher. My strongest signal is turning applied models into testable, operable systems.
External collaboration
Next public signal
Open-source contribution, external review or a PR review sample is the next proof I want to add.
Useful Links
Production Template
The reusable MLOps project that best summarizes the portfolio.
Technical Evidence
Deeper proof for technical hiring managers.