Are you passionate about building robust data infrastructure and enabling organizations to harness the power of data? Join our team as a Data Engineer, where you will play a critical role in designing, implementing, and maintaining scalable data pipelines and systems.
In this role, you will work closely with data scientists, analysts, and other stakeholders to understand business requirements and translate them into efficient, scalable data solutions. Your primary responsibilities will include collecting, organizing, and analyzing large datasets, as well as optimizing data systems and architecture for performance and reliability.
Key Responsibilities:
- Design, build, and maintain efficient ETL/ELT pipelines to process structured and unstructured data.
- Develop and optimize data models, ensuring high performance and reliability in analytics workflows.
- Collaborate with cross-functional teams to identify data needs and propose innovative solutions.
- Ensure data quality, governance, and security standards are adhered to across the pipeline.
- Monitor and troubleshoot performance issues, resolving bottlenecks in real time.
- Stay current with the latest trends and tools in data engineering to drive continuous improvement.

Qualifications:
- Proven experience as a Data Engineer or in a similar role, with expertise in SQL and at least one programming language (e.g., Python, Java, Scala).
- Hands-on experience with data pipeline tools (e.g., Apache Airflow, AWS Glue, Azure Data Factory) and big data frameworks (e.g., Hadoop, Spark).
- Strong understanding of database systems (e.g., SQL Server, PostgreSQL, NoSQL databases).
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization (e.g., Docker, Kubernetes).
- Excellent problem-solving skills and attention to detail.

If you thrive in a fast-paced, innovative environment and have a passion for transforming raw data into actionable insights, we’d love to hear from you. Join us and help shape the future of data-driven decision-making.