Afero (https://www.afero.io) provides a complete, end-to-end platform for the rapid development and deployment of secure, performant, cost-effective, and easy-to-use consumer IoT devices—from major appliances to high-volume commodity hardware to hobby projects and prototypes.
The beating heart of it all is the Afero Cloud, and we’re looking for a stellar DevOps Engineer to apply their passion for automation, measurement, and traceability as a key member of our Cloud Engineering team. The ideal candidate will play a central role in operating the next generation of the Afero Cloud and will help to shape processes and procedures for the entire operations lifecycle—from deployment to monitoring to triage to postmortem—continuously improving reliability and reducing toil.
What You’ll Do
- Work closely with our service engineers to maintain, improve, and scale our existing infrastructure.
- Work with the team to introduce new technologies and to develop and refine automated procedures that continuously strengthen platform resilience.
- Work with the team to develop and maintain backup and disaster recovery procedures, and play a key role in ensuring disaster preparedness.
- Participate in an on-call rotation for our production services, and play a central role in blameless postmortem incident analysis.
Why You’ll Want to Do It
For a small team, Afero engineers collectively do a little bit of everything—from cloud applications and infrastructure, to mobile development on multiple platforms, to firmware on a wide range of devices, to board-level hardware design and implementation of secure wireless devices. Everyone here can follow their curiosity to broader horizons. If you enjoy deep dives into assorted technology stacks to understand end-to-end system workflows, can adapt past experiences to new solutions, and can communicate ideas clearly among diverse audiences, Afero is a fantastic opportunity to explore and to expand your expertise.
Who We’re Looking For
The range of disciplines and experiences represented at Afero is broad and deep, and everyone on the team contributes a distinct perspective. This is the culture we strive to sustain as we grow, and we expect successful DevOps candidates to bring unique combinations of skills and experience, broadly drawn from:
- 5+ years in applied systems engineering and administration.
- Experience building, optimizing, and troubleshooting Kubernetes-orchestrated production environments on GCP, AWS, Azure, or similar platforms.
- Database management experience, preferably with MySQL.
- Thorough understanding of networking protocols and technologies such as TCP/IP, HTTP, DNS, IPSec, and VPN.
- Working knowledge of and experience with VPC network design, routing, and regional deployment.
- CLI and shell scripting fluency, plus practical experience with at least one other scripting language, such as Python or Ruby.
- Experience leveraging CI/CD tools such as Jenkins, Travis, FluxCD, and the like for automated build and deployment pipelines.
- Extensive experience with source control systems, their role in configuration management, and strategies for leveraging them as part of the CI/CD process.
- Experience managing IAM roles and permissions.
- Experience setting up, configuring, and maintaining monitoring tools such as Grafana, New Relic, and Datadog.
What We Use
- We deploy our code as Docker containers in Kubernetes-orchestrated clusters, using a CI/CD pipeline driven by Git and Jenkins.
- We write our code in Java and TypeScript and use MySQL, Neo4J, and BigQuery databases.
- We monitor our services using New Relic, StackDriver, Grafana, and other tools.
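As a flavor of the deployment model described above, a service in this kind of Docker-on-Kubernetes setup is typically declared with a minimal Deployment manifest like the sketch below. All names and the image path here are hypothetical placeholders, not Afero's actual services:

```yaml
# Illustrative sketch only — service and image names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                       # run three container instances for availability
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service        # must match the selector above
    spec:
      containers:
        - name: example-service
          # Image tag would be produced and pushed by the Git/Jenkins CI/CD pipeline.
          image: registry.example.com/example-service:1.0.0
          ports:
            - containerPort: 8080
```

In a Git-driven pipeline, a manifest like this usually lives in source control alongside the service, and the CI/CD system updates the image tag on each release.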