Data Engineer II
Are you passionate about leveraging technology for a greater good?
Join us as a Data Engineer if you're eager to make impactful contributions in a field where technology meets sustainability. We're looking for professionals who are committed to our cause of saving energy and reducing food waste through digital solutions.
Welcome to Alsense
Our team is dedicated to developing a digital IoT platform designed to significantly enhance energy efficiency and prevent food loss. This tool is not just a product; it's a movement towards a more sustainable and responsible global footprint. Joining us means contributing to a practical and impactful solution that addresses critical environmental issues.
Current Data Team Composition
The team currently consists of four members engaged in various data engineering and device integration tasks. As these activities become more defined, responsibilities are being split, requiring specific profiles to meet our business needs. The team is responsible for the development and maintenance of data solutions and is tasked with experimenting with and researching new technologies and tools in the Data & Analytics domain.
Job Responsibilities
Given the expanding scope of our projects, the new team member should have a software engineering background with 3 to 5 years of experience in the data engineering field. You will be responsible for developing the data products needed for the new domain-driven design of our data landscape and for helping with the Data Lake setup for future ML projects.
Your day-to-day responsibilities will include:
• Developing and optimizing scalable data pipelines and data products.
• Collaborating with other teams to define, design, and ship new features.
• Ensuring great code quality that helps our customers solve real-world problems.
• Receiving feedback from our users and making our product even better.
Minimum Requirements:
• Understanding of software engineering principles: scalability, extensibility, readability, testability, etc.
• Proficiency in SQL and Python
• Experience in building data products and deploying data pipelines
• Familiarity with Docker, Kubernetes, and Azure Cloud
• Experience with test frameworks like pytest and unittest, data quality frameworks like Great Expectations, Griffin, Tero, etc., and writing test automation (unit, integration, end-to-end, contract tests)
• Experience with any of the data orchestration tools such as Mage, Airflow, Dagster, etc.
• Familiarity with stream processing platforms like Kafka, Flink, Benthos, or similar would be a plus
• Knowledge of Apache Spark and Iceberg
Nice to haves
1. Familiarity with OpenTelemetry, the gRPC framework, and PostgreSQL would be awesome, but it is not mandatory. Knowledge of programming languages like Go, Java, or Scala is also a plus.
2. Familiarity with designing and developing data lakes.
3. Knowledge of MPP databases like StarRocks and query engines like Trino is a plus.
4. Knowledge of Google Building Ontology.
What We Offer
You'll join a supportive environment where your ideas can help shape what we build and how we work. We are committed to your professional development and to ensuring you have the opportunities to thrive and make a significant environmental impact.
Employee Benefits
We are excited to offer you the following benefits with your employment:
• Bonus system
• Paid vacation
• Possibility to work remotely
• Pension plan
• Personal insurance
• Communication package
• Opportunity to join Employee Resource Groups
• State-of-the-art virtual work environment
• Employee Referral Program
This list does not promise or guarantee any particular benefit or specific action. Benefits may depend on country or contract specifics and are subject to change at any time without prior notice.