About the project and product
Our customer is a leading public transport provider in the UK, mainland Europe and North America. Around 4 million customers a day trust its greener, smarter and better-value bus and rail services – and it is continuing to grow. New ideas and new ways of delivering transport are what set it apart from competitors. A focus on innovation is helping make buses and trains more efficient, easier to use and more attractive to customers.
Job description, responsibilities, and duties
You will become a part of our Data Warehouse team that is responsible for implementing data transformations with data pipelines as well as ingress and egress integrations from or to our Data Warehouse.
We do all of this using state-of-the-art open source and cloud technologies that let us build scalable solutions capable of handling millions of data points every day.
What you can expect from the role:
You will be responsible for the whole DevOps infrastructure – from Jenkins and Airflow to AWS and everything in between.
AWS and Airflow will become your bread and butter for maintaining the majority of our ETL workflows.
Outcomes of your work will have a direct impact on how thousands of bus drivers transport hundreds of millions of people every year.
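The ETL workflows mentioned above follow the classic extract-transform-load pattern. As a minimal sketch in plain Python – all record fields and function names here are illustrative assumptions, not taken from the actual codebase:

```python
# Extract-transform-load in miniature: pull raw records, normalize them,
# and append them to a destination table (here just an in-memory list).

def extract(raw_records):
    """Simulate reading raw ticketing events (in practice: S3, APIs, SFTP)."""
    return [r for r in raw_records if r]  # drop empty/missing records

def transform(records):
    """Normalize each record: trim and lowercase route names, cast fares to float."""
    return [
        {"route": r["route"].strip().lower(), "fare": float(r["fare"])}
        for r in records
    ]

def load(records, destination):
    """Append transformed records to the destination table; return row count."""
    destination.extend(records)
    return len(records)

warehouse = []
raw = [{"route": " 42A ", "fare": "2.50"}, None, {"route": "N7", "fare": "3"}]
loaded = load(transform(extract(raw)), warehouse)
```

In a production setup each of these steps would typically be an Airflow task, with the orchestration, retries and scheduling handled by the DAG rather than by the functions themselves.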
What we expect from you:
We expect you to become a reliable part of our team, collaborate with your team members and strive to do high-quality work.
As an Experienced/Senior DevOps Engineer (Integration) you will be responsible for:
- Delivering and supporting large, business-critical integration pipelines and systems
- Creating infrastructure components that deliver secure services
- Developing systems for monitoring, alerting and measuring system performance
- Maintaining and improving the deployment pipelines
- Analyzing and supporting developer infrastructure needs for various applications
We’ll provide you all the support you need to make it happen!
Job location: Kosice/full remote
Key areas of interest and knowledge expected from the candidate:
- An understanding of common IT infrastructure concepts
- Ability to troubleshoot and analyze performance and operational issues in complex infrastructure
- Ability to find your way in a large infrastructure setup by reading IaC code (we’re looking for candidates who are able to do their DevOps tasks independently)
- Professional experience with AWS, Terraform, Linux
- Experience with Jenkins, Docker
- Experience with ETL tools/platforms is a strong plus (Airflow / MWAA or other)
- Experience with monitoring and alerting systems (CloudWatch, Splunk)
- Experience with AWS Transfer Family and Amazon API Gateway is a strong plus
- Scripting in Python / Shell
- Familiarity with Atlassian products (Jira, Confluence, Bitbucket)
- Experience with Amazon Redshift
*Salary range based on caissarecruitment.com