Middle/Senior DevOps Engineer

Experience: 2-3+ years
Remote, full-time

Domain:

  • Print Publishing

The project is an AI-powered automation, optimization, and prediction platform designed to help enterprise-level content publishers gain deeper insight into what their audiences need, make smarter decisions with their resources, and get the most value out of their content.

Print laydown technology powers intuitive, automated page laydown through CMS partners. It automates the print production process in minutes, using AI that ensures printed pages look and feel as if they were produced by experienced editors and page designers, free of the rigid constraints of templates.

Project Specifications:

  • Back-end: Python;
  • Infrastructure: AWS, Docker, GitHub;
  • Other: Azure DevOps (TFS).

What we expect:

  • proven experience in deploying and managing Kubernetes environments, specifically with AWS EKS;
  • hands-on experience with Docker for containerized environments;
  • expertise in implementing and managing CI/CD pipelines for Kubernetes using tools like ArgoCD, GitHub Actions, or Buildkite;
  • proficiency with Infrastructure as Code (IaC) using Terraform and Helm for templated deployments;
  • extensive experience with AWS services, including EC2, S3, RDS, Lambda, ECS, EKS, CloudFront, ECR and networking;
  • familiarity with Tableau server management and deployment in high-availability environments;
  • solid experience with data pipeline technologies, data streaming, and ETL workflows;
  • strong proficiency in Unix-based operating systems and shell scripting;
  • in-depth knowledge of computer networking (subnets, routes, IPs, ports) and database management, particularly Postgres (AWS RDS);
  • familiarity with monitoring tools such as DataDog and PagerDuty, and exposure to high-load data pipeline technologies like Kafka, Kinesis, and Spark;
  • strong understanding of security best practices for distributed systems, with the ability to balance security with operational efficiency;
  • experience with managing, tuning, and securing high-load data systems, with a focus on performance optimization and scalability;
  • experience with AzureAD and SSO integration for secure and efficient account management;
  • English level: Upper-Intermediate.

What you will do:

  • collaborate with engineering teams to establish and optimize continuous delivery (CI/CD) environments and workflows;
  • manage and maintain a highly available Kubernetes-based product platform, primarily using AWS EKS;
  • automate and enhance build, deployment, and infrastructure processes;
  • monitor service health and availability with automated fault detection, alerting, and triage, ensuring timely manual or automated recovery;
  • manage and deploy Tableau Server in a high-availability environment, ensuring optimal performance and uptime;
  • develop automated installation solutions for containerized data pipelines;
  • drive initiatives to improve cloud infrastructure cost-efficiency;
  • design and implement improved infrastructure monitoring techniques;
  • containerize new data pipeline components using Docker;
  • integrate automated testing into deployment pipelines;
  • participate in an on-call rotation to address and resolve critical issues that arise outside of regular working hours (one rotating team member covers 11 am to 11 pm EST, Toronto late morning until late evening, on all weekdays and weekends);
  • manage and maintain Snowflake environments and Datalake solutions, ensuring seamless integration with other data systems and workflows;
  • develop and implement disaster recovery plans and backup strategies to ensure business continuity;
  • create and maintain detailed technical documentation for systems, processes, and procedures.

Nice to have:

  • experience using AI tools.


Interview stages:

Manager
